Blockchain Learning, Part 3

Currently, there are two main methods of validating the blockchain as a client: full nodes and SPV clients.

Full Nodes

The first and most secure model is to ensure the validity of the blockchain by downloading and validating blocks from the genesis block all the way up to the most recently discovered block.

To deceive a full node, an attacker would need to provide a complete alternative blockchain history with more cumulative proof-of-work than the current "real" chain, which is extremely difficult and computationally expensive, because the chain with the most cumulative proof-of-work is by definition the "real" chain. Fooling a full node about a transaction with 6 confirmations is prohibitively expensive due to the computational difficulty of generating new blocks at the tip of the chain. This form of verification is highly resistant to Sybil attacks: only a single honest peer is required for the client to receive and verify the full state of the "real" blockchain.


Simplified Payment Verification (SPV)

The other approach, detailed in the original Bitcoin paper, is Simplified Payment Verification: clients download only block headers during the initial sync and then request transactions from full nodes as needed. Storage scales linearly with the height of the blockchain, at only 80 bytes per block header, or about 4.2 MB per year, independent of the total block size.
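As a quick sanity check on that figure, here is the arithmetic as a runnable sketch (it assumes Bitcoin's 10-minute target block interval holds exactly):

```python
# Yearly storage cost of header-only sync, assuming one block
# every 10 minutes (Bitcoin's target interval).
HEADER_SIZE = 80                  # bytes per block header
BLOCKS_PER_YEAR = 6 * 24 * 365    # 6 blocks/hour * 24 hours * 365 days

bytes_per_year = HEADER_SIZE * BLOCKS_PER_YEAR
print(f"{bytes_per_year / 1e6:.1f} MB per year")  # -> 4.2 MB per year
```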

As described in the white paper, the Merkle root in the block header, together with a Merkle branch, can prove to the SPV client that the transaction in question is embedded in a block in the blockchain. This does not guarantee the validity of the embedded transaction; rather, it demonstrates the amount of work that would be required to pull off a double-spend attack.

The depth of a block in the blockchain corresponds to the cumulative difficulty of building on top of that particular block. An SPV client that knows the Merkle root and the relevant transaction requests the corresponding Merkle branch from a full node. Once the branch is retrieved, proving that the transaction is in the block, the SPV client can use the block's depth as a proxy for the transaction's validity and security. The cost for a malicious node to attack a user by inserting an invalid transaction increases with the depth of the block, since the malicious node alone would have to mine the fake chain.
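A minimal sketch of the branch check itself, assuming Bitcoin-style double SHA-256 and a proof supplied as (sibling hash, side) pairs from leaf to root (real clients also track internal byte order and derive the sibling's side from the transaction's index in the block):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash function: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch, merkle_root: bytes) -> bool:
    """Fold the transaction hash up the tree. `branch` is a list of
    (sibling_hash, sibling_is_right) pairs, ordered from leaf to root."""
    h = tx_hash
    for sibling, sibling_is_right in branch:
        pair = h + sibling if sibling_is_right else sibling + h
        h = double_sha256(pair)
    return h == merkle_root
```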

Potential SPV Weaknesses

First, while SPV clients cannot easily be tricked into thinking a transaction is in a block when it is not, the reverse does not hold. A full node can simply lie by omission, leading an SPV client to believe a transaction never happened. This can be considered a form of denial of service. One mitigation strategy is to connect to a number of full nodes and send the requests to each of them; however, this can be defeated by network partitioning or Sybil attacks, since identities are essentially free, and it can be bandwidth intensive. Care must be taken to ensure the client is not cut off from honest nodes.

Second, SPV clients only care about the transactions corresponding to keys they own. If an SPV client downloads all blocks and then discards the ones it does not need, it consumes a great deal of bandwidth. If it instead simply asks full nodes for blocks containing specific transactions, the full nodes gain a complete view of which public addresses correspond to the user. This is a serious privacy leak.

To alleviate the latter problem, Bloom filters have been adopted as a way to obfuscate and compress these data requests.

Bloom Filters

A Bloom filter is a space-efficient probabilistic data structure used to test whether an element is in a set. This data structure achieves excellent data compression at the cost of a stated false positive rate.

A Bloom filter starts with an array of n bits, all set to 0, together with a set of k independent hash functions, each of which outputs an integer between 1 and n.

When an element is added to the Bloom filter, it is hashed by each of the k hash functions, and for each of the k outputs, the bit at that index in the filter is set to 1.

Querying a Bloom filter uses the same k hash functions. If all k bits accessed in the filter are set to 1, the element is in the set with high probability. Of course, those k indices could also have been set to 1 by a combination of other elements in the domain, but the parameters allow the user to choose an acceptable false-positive rate.

Elements cannot be removed from a Bloom filter.
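A toy Bloom filter along these lines (a sketch only: it derives its k hash functions by salting SHA-256 and indexes bits from 0, whereas BIP37 actually specifies murmur3 hashes):

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits: int, k_hashes: int):
        self.n = n_bits
        self.k = k_hashes
        self.bits = [0] * n_bits              # n bits, all initially 0

    def _indexes(self, element: bytes):
        # Simulate k independent hash functions by salting one hash with i.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + element).digest()
            yield int.from_bytes(digest, "big") % self.n

    def add(self, element: bytes) -> None:
        for idx in self._indexes(element):
            self.bits[idx] = 1                # set the bit for each output

    def query(self, element: bytes) -> bool:
        # True means "probably present"; False means "definitely absent".
        return all(self.bits[idx] for idx in self._indexes(element))
```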

Applications of Bloom Filters

The false-positive rate is an adjustable parameter that trades off privacy against bandwidth. The SPV client creates its Bloom filter and sends it to the full node in a filterload message, which sets the filter that determines which transactions are relayed. The full node then responds with modified Merkle blocks: a Merkle block is a block header accompanied by only the Merkle branches for the transactions that match the Bloom filter.

SPV clients can add not only transactions as elements to filters, but also public keys, data from signature scripts and public key scripts, and more. This makes it possible to verify P2SH transactions.

If a user is more privacy-conscious, he can set the Bloom filter to admit more false positives, at the expense of the extra bandwidth needed to validate transactions. If a user is on a tight bandwidth budget, he can set the false-positive rate low, knowing that this will allow full nodes to learn precisely which transactions are associated with his client.
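The standard approximation makes this trade-off concrete: with m bits, k hash functions, and n inserted elements, the false-positive rate is roughly (1 - e^(-kn/m))^k. A small sketch (the function name and the sample parameters are illustrative, not from any Bitcoin library):

```python
import math

def false_positive_rate(m_bits: int, k_hashes: int, n_elements: int) -> float:
    """Approximate Bloom filter FP rate: (1 - e^(-kn/m))^k."""
    return (1 - math.exp(-k_hashes * n_elements / m_bits)) ** k_hashes

# Smaller filters leak less about which transactions are really yours,
# but cost more bandwidth in false-positive Merkle blocks.
for m in (512, 2048, 8192):
    p = false_positive_rate(m, k_hashes=5, n_elements=100)
    print(f"{m:5d} bits -> FP rate {p:.6f}")
```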

Related resources:

  • BitcoinJ, a Java implementation of Bitcoin built on the SPV security model and Bloom filters.

  • BIP37, which specifies the filterload and merkleblock protocol messages.

Mining

Mining adds new blocks to the blockchain, making transaction history difficult to modify.

Mining today takes two forms:

  • Solo mining, where the miner tries to generate new blocks on his own and keeps the entire block reward and transaction fees for himself. This gives him larger payments with higher variance (longer time between rewards).

  • Pooled mining, where the miner pools resources with other miners to find blocks more frequently. The rewards are shared among the pool's miners, roughly in proportion to the hash power each contributes, giving miners smaller payments with lower variance (shorter time between rewards).


In pooled mining, the pool sets a target threshold several orders of magnitude lower in difficulty than the actual network difficulty. This causes the mining hardware to return many block headers that satisfy the pool's target but not the network's; these otherwise useless headers serve as proof of the miners' work and are called shares. When a block header that meets the network target is found, the pool distributes the payout according to the number of shares each miner submitted. For example, if the pool's target threshold is 100 times easier to meet than the network's, on average 100 shares will be generated per successful block, so the pool can pay roughly 1/100 of its income for each share received, as the sketch below illustrates. Different pools layer different reward-distribution schemes on top of this basic share system.
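A sketch of that proportional payout arithmetic (the 6.25 BTC reward and the miner names are hypothetical; real pools use schemes such as PPS or PPLNS built on the same idea):

```python
def distribute_reward(block_reward: float, shares_by_miner: dict) -> dict:
    """Split a found block's reward in proportion to submitted shares."""
    total_shares = sum(shares_by_miner.values())
    return {miner: block_reward * shares / total_shares
            for miner, shares in shares_by_miner.items()}

# Pool target 100x easier than the network target -> ~100 shares per block.
payouts = distribute_reward(6.25, {"alice": 60, "bob": 30, "carol": 10})
print(payouts)  # {'alice': 3.75, 'bob': 1.875, 'carol': 0.625}
```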
