Ethereum Source Code Analysis (4): Mining and Consensus, Part 2 - Mining Details and Consensus

This post explains the ethash DAG in detail and then analyzes consensus, which together give a fairly complete picture of Ethereum's mining and consensus.

The previous analysis mentioned ethash; so how is it implemented internally?

Ethash derives from the Dagger Hashimoto algorithm. Hashimoto is designed to resist dedicated ASIC mining chips while still supporting light clients and full-chain data storage.

The Dagger algorithm uses a directed acyclic graph (DAG) to obtain a function that is memory-hard to compute yet memory-easy to verify (which, as we know, is an important property for a PoW hash). Its rationale is that each individual nonce only needs a small portion of a large data tree, and recomputing that subtree for every nonce is prohibitive for a miner, so the tree has to be stored; yet verifying a single nonce remains cheap. Dagger was intended to replace existing algorithms that are merely memory-hard, such as Scrypt (used by Litecoin), which are hard to compute and also hard to verify: once their memory hardness is raised to a genuinely secure level, verification becomes correspondingly impractical.

Dagger Hashimoto does not use the blockchain itself as its data source; instead it uses a custom-generated dataset of roughly 1 GB. This dataset is regenerated automatically as the chain's block count N passes certain thresholds.

A completely different DAG is generated every 30000 blocks; this 125-hour window is called an epoch (about 5.2 days), and the DAG takes some time to generate. Since the DAG is determined solely by block height, it can be generated in advance; if it is not, the client has to wait for the generation process to finish before it can produce a block. If clients do not pregenerate and cache the DAG, the network may see massive block delays at each epoch transition. Note that the DAG does not have to be generated in order to verify the proof-of-work; verification works with low CPU and little memory.

As a special case, when a node is started from scratch, mining only begins once the DAG for the current epoch has been built. (Note that the block counts and sizes mentioned here are adjusted over versions as the system evolves, so they may differ.)

The algorithm proceeds roughly as follows:

1. A seed is created for each block by scanning the block headers up to that point (see the sketch after this list).

2. From the seed, a 16 MB pseudorandom cache can be computed; light clients store this cache.

3. From the cache, a 1 GB dataset is generated, where each item in the dataset depends on only a small part of the cache. Full clients and miners store this dataset, and it grows linearly over time.

4. Mining consists of grabbing random slices of the dataset and hashing them together. Verification can be done with little memory by using the cache to regenerate just the specific slices of the dataset that are needed, so only the cache has to be stored.
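To make steps 1 and 2 concrete, here is a minimal sketch (not the actual go-ethereum code) of how the per-epoch seed is derived: a 32-byte value starting at all zeros is hashed once with Keccak-256 for each elapsed epoch of 30000 blocks. It assumes golang.org/x/crypto/sha3 for the legacy Keccak variant:

package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

const epochLength = 30000 // blocks per epoch

// seedHash derives the epoch seed for a block height: Keccak-256 applied
// to 32 zero bytes once per elapsed epoch.
func seedHash(block uint64) []byte {
	seed := make([]byte, 32)
	for i := uint64(0); i < block/epochLength; i++ {
		h := sha3.NewLegacyKeccak256()
		h.Write(seed)
		seed = h.Sum(nil)
	}
	return seed
}

func main() {
	fmt.Printf("epoch 0 seed: %x\n", seedHash(0))
	fmt.Printf("epoch 1 seed: %x\n", seedHash(30000))
}

Back in the go-ethereum source, the ethash configuration looks like this: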

// Config are the configuration parameters of the ethash.
type Config struct {
	CacheDir       string // where caches are stored on disk
	CachesInMem    int    // number of caches to keep in memory
	CachesOnDisk   int    // number of caches to keep on disk
	DatasetDir     string // where datasets are stored on disk
	DatasetsInMem  int    // number of datasets to keep in memory
	DatasetsOnDisk int    // number of datasets to keep on disk
	PowMode        Mode   // consensus mode: normal, fake, fully fake, or test
}

// Ethash is a consensus engine based on proof-of-work implementing the ethash
// algorithm.
type Ethash struct {
	config Config

	// In-memory LRU caches for caches and datasets, to avoid regenerating
	// them too often (similar in spirit to the LRU cache in RocksDB etc.)
	caches   *lru // In memory caches to avoid regenerating too often
	datasets *lru // In memory datasets to avoid regenerating too often

	// Mining related fields
	rand     *rand.Rand    // Properly seeded random source for nonces
	threads  int           // Number of threads to mine on if mining
	update   chan struct{} // Notification channel to update mining parameters (discussed in the previous post)
	hashrate metrics.Meter // Meter tracking the average hashrate

	// The fields below are hooks for testing
	shared    *Ethash       // Shared PoW verifier to avoid cache regeneration
	fakeFail  uint64        // Block number which fails PoW check even in fake mode
	fakeDelay time.Duration // Time delay to sleep for before returning from verify

	lock sync.Mutex // Ensures thread safety for the in-memory caches and mining fields
}
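The PowMode field and the fake* test hooks are typically set through dedicated constructors in the ethash package (NewFaker, NewFakeFailer and NewFakeDelayer exist in go-ethereum, though exact signatures can vary between versions). A small usage sketch:

package main

import (
	"time"

	"github.com/ethereum/go-ethereum/consensus/ethash"
)

func main() {
	// ModeFake: accepts every block without doing any real PoW work.
	faker := ethash.NewFaker()
	// A fake engine that still fails PoW verification at block 42;
	// this is what the fakeFail field controls.
	failer := ethash.NewFakeFailer(42)
	// A fake engine that sleeps before returning from verification;
	// this is what the fakeDelay field controls.
	delayer := ethash.NewFakeDelayer(500 * time.Millisecond)

	_, _, _ = faker, failer, delayer
}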

In the earlier post on the mining machinery there was a Seal function; it is the real starting point of the PoW work.

// Seal implements consensus.Engine, attempting to find a nonce that satisfies
// the block's difficulty requirements.
func (ethash *Ethash) Seal(chain consensus.ChainReader, block *types.Block, stop <-chan struct{}) (*types.Block, error) {
	// If we're running a fake PoW, simply return a 0 nonce immediately
	if ethash.config.PowMode == ModeFake || ethash.config.PowMode == ModeFullFake {
		header := block.Header()
		header.Nonce, header.MixDigest = types.BlockNonce{}, common.Hash{}
		return block.WithSeal(header), nil
	}
	// If we're running a shared PoW, delegate sealing to it
	if ethash.shared != nil {
		return ethash.shared.Seal(chain, block, stop)
	}
	// Create a runner and the multiple search threads it directs
	abort := make(chan struct{})
	found := make(chan *types.Block)

	ethash.lock.Lock()
	threads := ethash.threads
	if ethash.rand == nil {
		// The seed mentioned above
		seed, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
		if err != nil {
			ethash.lock.Unlock()
			return nil, err
		}
		// Build the nonce random source from it
		ethash.rand = rand.New(rand.NewSource(seed.Int64()))
	}
	ethash.lock.Unlock()
	if threads == 0 {
		threads = runtime.NumCPU()
	}
	if threads < 0 {
		threads = 0 // Allows disabling local mining without extra logic around local/remote
	}
	var pend sync.WaitGroup
	for i := 0; i < threads; i++ {
		pend.Add(1)
		go func(id int, nonce uint64) {
			defer pend.Done()
			// The miner goroutines start working
			ethash.mine(block, id, nonce, abort, found)
		}(i, uint64(ethash.rand.Int63()))
	}
	// Wait until sealing is terminated or a nonce is found
	var result *types.Block
	select {
	case <-stop:
		// Outside abort, stop all miner threads
		close(abort)
	case result = <-found: // success
		// One of the threads found a block, abort all others
		close(abort)
	case <-ethash.update:
		// Thread count was changed on user request, restart with the new settings
		close(abort)
		pend.Wait()
		return ethash.Seal(chain, block, stop)
	}
	// Wait for all miners to terminate and return the block
	pend.Wait()
	return result, nil
}

Now for the real miner:

// mine is the actual proof-of-work miner that searches for a nonce starting from
// seed that results in correct final block difficulty.
func (ethash *Ethash) mine(block *types.Block, id int, seed uint64, abort chan struct{}, found chan *types.Block) {
	// Extract some data from the header to start hashing against
	var (
		header  = block.Header()
		hash    = header.HashNoNonce().Bytes()
		target  = new(big.Int).Div(maxUint256, header.Difficulty)
		number  = header.Number.Uint64()
		dataset = ethash.dataset(number)
	)
	// Start generating random nonces until we abort or find a good one
	var (
		attempts = int64(0)
		nonce    = seed
	)
	logger := log.New("miner", id)
	logger.Trace("Started ethash search for new nonces", "seed", seed)
search:
	for {
		select {
		case <-abort:
			// Mining terminated, update stats and abort
			logger.Trace("Ethash nonce search aborted", "attempts", nonce-seed)
			ethash.hashrate.Mark(attempts) // update the running hashrate meter
			break search
		default:
			// We don't have to update hash rate on every nonce, so update after after 2^X nonces
			attempts++
			if (attempts % (1 << 15)) == 0 {
				ethash.hashrate.Mark(attempts)
				attempts = 0
			}
			// Compute the PoW value of this nonce
			digest, result := hashimotoFull(dataset.dataset, hash, nonce)
			if new(big.Int).SetBytes(result).Cmp(target) <= 0 {
				// Correct nonce found, create a new header with it
				header = types.CopyHeader(header)
				header.Nonce = types.EncodeNonce(nonce)
				header.MixDigest = common.BytesToHash(digest)

				// Seal and return a block (if still needed)
				select {
				case found <- block.WithSeal(header):
					logger.Trace("Ethash nonce found and reported", "attempts", nonce-seed, "nonce", nonce)
				case <-abort:
					logger.Trace("Ethash nonce found but discarded", "attempts", nonce-seed, "nonce", nonce)
				}
				break search
			}
			nonce++
		}
	}
	// Datasets are unmapped in a finalizer. Ensure that the dataset stays live
	// during sealing so it's not unmapped while being read.
	runtime.KeepAlive(dataset)
}
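The heart of mine is the check new(big.Int).SetBytes(result).Cmp(target) <= 0 with target = 2^256 / difficulty: the PoW output, read as a 256-bit integer, must fall below a threshold that shrinks as difficulty grows. A minimal self-contained illustration (the values are made up):

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// target = 2^256 / difficulty, exactly as in mine() above.
	maxUint256 := new(big.Int).Lsh(big.NewInt(1), 256)
	difficulty := big.NewInt(1000000)
	target := new(big.Int).Div(maxUint256, difficulty)

	// In the real code, result is the second return value of hashimotoFull;
	// the nonce is valid iff result <= target, so the expected number of
	// attempts is roughly proportional to the difficulty.
	result := new(big.Int).SetBytes([]byte{0x00, 0x01, 0x02})
	fmt.Println("target:", target)
	fmt.Println("valid :", result.Cmp(target) <= 0)
}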

That completes the details of ethash and mining. As the Yellow Paper also points out, the DAG need not actually be generated in order to verify a block, so in the actual algorithm flow a verifying Ethereum node may never really build this DAG.

The detailed hash algorithm source all lives in algorithm.go:

// hashimoto aggregates data from the full dataset in order to produce our final
// value for a particular header hash and nonce.
func hashimoto(hash []byte, nonce uint64, size uint64, lookup func(index uint32) []uint32) ([]byte, []byte) {
	// Calculate the number of theoretical rows (we use one buffer nonetheless)
	rows := uint32(size / mixBytes)

	// Combine header+nonce into a 40 byte seed
	seed := make([]byte, 40)
	copy(seed, hash)
	binary.LittleEndian.PutUint64(seed[32:], nonce)

	seed = crypto.Keccak512(seed)
	seedHead := binary.LittleEndian.Uint32(seed)

	// Start the mix with replicated seed
	mix := make([]uint32, mixBytes/4)
	for i := 0; i < len(mix); i++ {
		mix[i] = binary.LittleEndian.Uint32(seed[i%16*4:])
	}
	// Mix in random dataset nodes
	temp := make([]uint32, len(mix))

	for i := 0; i < loopAccesses; i++ {
		parent := fnv(uint32(i)^seedHead, mix[i%len(mix)]) % rows
		for j := uint32(0); j < mixBytes/hashBytes; j++ {
			copy(temp[j*hashWords:], lookup(2*parent+j))
		}
		fnvHash(mix, temp)
	}
	// Compress mix
	for i := 0; i < len(mix); i += 4 {
		mix[i/4] = fnv(fnv(fnv(mix[i], mix[i+1]), mix[i+2]), mix[i+3])
	}
	mix = mix[:len(mix)/4]

	digest := make([]byte, common.HashLength)
	for i, val := range mix {
		binary.LittleEndian.PutUint32(digest[i*4:], val)
	}
	return digest, crypto.Keccak256(append(seed, digest...))
}
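For reference, hashimoto's arithmetic is driven by a handful of package constants defined in algorithm.go. The values below match go-ethereum around the time of this analysis; treat them as indicative, since parameters can change between versions:

const (
	hashBytes      = 64    // length of a single Keccak-512 hash, in bytes
	hashWords      = 16    // number of 32-bit words in a hash (64 / 4)
	mixBytes       = 128   // width of the mix: two hashes side by side
	loopAccesses   = 64    // number of dataset accesses per hashimoto run
	epochLength    = 30000 // blocks per epoch (cache/dataset regeneration)
	datasetParents = 256   // number of cache parents of each dataset element
)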

// hashimotoLight aggregates data from the full dataset (using only a small
// in-memory cache) in order to produce our final value for a particular header
// hash and nonce.
func hashimotoLight(size uint64, cache []uint32, hash []byte, nonce uint64) ([]byte, []byte) {
	keccak512 := makeHasher(sha3.NewKeccak512())

	lookup := func(index uint32) []uint32 {
		rawData := generateDatasetItem(cache, index, keccak512)

		data := make([]uint32, len(rawData)/4)
		for i := 0; i < len(data); i++ {
			data[i] = binary.LittleEndian.Uint32(rawData[i*4:])
		}
		return data
	}
	return hashimoto(hash, nonce, size, lookup)
}

// hashimotoFull aggregates data from the full dataset (using the full in-memory
// dataset) in order to produce our final value for a particular header hash and
// nonce. It computes the PoW result from the given nonce and hash.
func hashimotoFull(dataset []uint32, hash []byte, nonce uint64) ([]byte, []byte) {
	lookup := func(index uint32) []uint32 {
		offset := index * hashWords
		return dataset[offset : offset+hashWords]
	}
	return hashimoto(hash, nonce, uint64(len(dataset))*4, lookup)
}

Below is the FNV hash algorithm:

// fnv is an algorithm inspired by the FNV hash, which in some cases is used as
// a non-associative substitute for XOR. Note that we multiply the prime with
// the full 32-bit input, in contrast with the FNV-1 spec which multiplies the
// prime with one byte (octet) in turn.
func fnv(a, b uint32) uint32 {
	return a*0x01000193 ^ b
}

// fnvHash mixes in data into mix using the ethash fnv method.
func fnvHash(mix []uint32, data []uint32) {
	for i := 0; i < len(mix); i++ {
		mix[i] = mix[i]*0x01000193 ^ data[i]
	}
}
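The comment calls fnv a "non-associative substitute for XOR", and it is easy to check that grouping matters here, unlike with plain XOR. A tiny self-contained check:

package main

import "fmt"

func fnv(a, b uint32) uint32 { return a*0x01000193 ^ b }

func main() {
	a, b, c := uint32(1), uint32(2), uint32(3)
	// XOR is associative: (a^b)^c == a^(b^c). fnv is not:
	fmt.Println(fnv(fnv(a, b), c)) // (a*p ^ b)*p ^ c
	fmt.Println(fnv(a, fnv(b, c))) // a*p ^ b*p ^ c, a different value
}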

The comments here are fairly clear on their own.

In a PoW system, mining and consensus cannot be separated cleanly, so some mining details are presented together with the consensus analysis; this makes the overall picture clearer.

consensus/ethash/consensus.go:

// Finalize implements consensus.Engine, accumulating the block and uncle rewards,
// setting the final state and assembling the block.
// This is where rewards are computed and the final block is assembled for the chain.
func (ethash *Ethash) Finalize(chain consensus.ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
	// Accumulate any block and uncle rewards and commit the final state root
	accumulateRewards(chain.Config(), state, header, uncles) // compute mining rewards
	// Compute the root of the state MPT
	header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))

	// Header seems complete, assemble into a block and return
	return types.NewBlock(header, txs, uncles, receipts), nil // assemble the new block
}

// AccumulateRewards credits the coinbase of the given block with the mining
// reward. The total reward consists of the static block reward and rewards for
// included uncles. The coinbase of each uncle block is also rewarded.
// This is where the actual computation happens.
func accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) {
	// Select the correct block reward based on chain progression
	blockReward := FrontierBlockReward
	if config.IsByzantium(header.Number) {
		blockReward = ByzantiumBlockReward
	}
	// Accumulate the rewards for the miner and any included uncles
	reward := new(big.Int).Set(blockReward)
	r := new(big.Int)
	for _, uncle := range uncles {
		r.Add(uncle.Number, big8)
		r.Sub(r, header.Number)
		r.Mul(r, blockReward)
		r.Div(r, big8)
		state.AddBalance(uncle.Coinbase, r)

		r.Div(blockReward, big32)
		reward.Add(reward, r)
	}
	state.AddBalance(header.Coinbase, reward)
}
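Plugging concrete numbers into the loop above: with the Byzantium reward of 3 ETH, an uncle mined one block below the including header earns (99 + 8 - 100) / 8 = 7/8 of the block reward, and the including miner collects an extra 1/32 per uncle. A self-contained sketch of that arithmetic (the heights are made up):

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// ByzantiumBlockReward is 3 ETH (3e18 wei) in go-ethereum's params.
	blockReward := new(big.Int).Mul(big.NewInt(3), big.NewInt(1e18))
	big8, big32 := big.NewInt(8), big.NewInt(32)

	headerNumber := big.NewInt(100) // the including block
	uncleNumber := big.NewInt(99)   // an uncle one generation back

	// Uncle reward: (uncle.Number + 8 - header.Number) * blockReward / 8
	r := new(big.Int).Add(uncleNumber, big8)
	r.Sub(r, headerNumber)
	r.Mul(r, blockReward)
	r.Div(r, big8)
	fmt.Println("uncle reward (wei):", r) // 7/8 * 3 ETH = 2.625 ETH

	// Nephew bonus for the including miner: blockReward / 32 per uncle.
	bonus := new(big.Int).Div(blockReward, big32)
	fmt.Println("miner bonus (wei):", bonus) // 3/32 ETH = 0.09375 ETH
}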

// NewBlock creates a new block. The input data is copied,
// changes to header and to the field values will not affect the
// block.
//
// The values of TxHash, UncleHash, ReceiptHash and Bloom in header
// are ignored and set to values derived from the given txs, uncles
// and receipts.
func NewBlock(header *Header, txs []*Transaction, uncles []*Header, receipts []*Receipt) *Block {
	b := &Block{header: CopyHeader(header), td: new(big.Int)}

	// TODO: panic if len(txs) != len(receipts)
	// Compute the root hash of the transaction MPT
	if len(txs) == 0 {
		b.header.TxHash = EmptyRootHash
	} else {
		b.header.TxHash = DeriveSha(Transactions(txs))
		b.transactions = make(Transactions, len(txs))
		copy(b.transactions, txs)
	}
	// Compute the root hash of the receipt MPT
	if len(receipts) == 0 {
		b.header.ReceiptHash = EmptyRootHash
	} else {
		b.header.ReceiptHash = DeriveSha(Receipts(receipts))
		b.header.Bloom = CreateBloom(receipts)
	}
	// Compute the uncle hash
	if len(uncles) == 0 {
		b.header.UncleHash = EmptyUncleHash
	} else {
		b.header.UncleHash = CalcUncleHash(uncles)
		b.uncles = make([]*Header, len(uncles))
		for i := range uncles {
			b.uncles[i] = CopyHeader(uncles[i])
		}
	}
	return b
}

// WriteBlockWithState writes the block and all associated state to the database.
func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.Receipt, state *state.StateDB) (status WriteStatus, err error) {
	bc.wg.Add(1)
	defer bc.wg.Done()

	// Calculate the total difficulty of the block
	ptd := bc.GetTd(block.ParentHash(), block.NumberU64()-1)
	if ptd == nil {
		return NonStatTy, consensus.ErrUnknownAncestor
	}
	// Make sure no inconsistent state is leaked during insertion
	bc.mu.Lock()
	defer bc.mu.Unlock()

	localTd := bc.GetTd(bc.currentBlock.Hash(), bc.currentBlock.NumberU64())
	externTd := new(big.Int).Add(block.Difficulty(), ptd)

	// Irrelevant of the canonical status, write the block itself to the database
	if err := bc.hc.WriteTd(block.Hash(), block.NumberU64(), externTd); err != nil {
		return NonStatTy, err
	}
	// Write other block data using a batch.
	batch := bc.db.NewBatch()
	if err := WriteBlock(batch, block); err != nil {
		return NonStatTy, err
	}
	root, err := state.Commit(bc.chainConfig.IsEIP158(block.Number()))
	if err != nil {
		return NonStatTy, err
	}
	// Flush the new state into the trie database cache
	triedb := bc.stateCache.TrieDB()

	// If we're running an archive node, always flush
	if bc.cacheConfig.Disabled {
		if err := triedb.Commit(root, false); err != nil {
			return NonStatTy, err
		}
	} else {
		// Full but not archive node, do proper garbage collection
		triedb.Reference(root, common.Hash{}) // metadata reference to keep trie alive
		bc.triegc.Push(root, -float32(block.NumberU64()))

		if current := block.NumberU64(); current > triesInMemory {
			// Find the next state trie we need to commit
			header := bc.GetHeaderByNumber(current - triesInMemory)
			chosen := header.Number.Uint64()

			// Only write to disk if we exceeded our memory allowance *and* also have at
			// least a given number of tries gapped.
			var (
				size  = triedb.Size()
				limit = common.StorageSize(bc.cacheConfig.TrieNodeLimit) * 1024 * 1024
			)
			if size > limit || bc.gcproc > bc.cacheConfig.TrieTimeLimit {
				// If we're exceeding limits but haven't reached a large enough memory gap,
				// warn the user that the system is becoming unstable.
				if chosen < lastWrite+triesInMemory {
					switch {
					case size >= 2*limit:
						log.Warn("State memory usage too high, committing", "size", size, "limit", limit, "optimum", float64(chosen-lastWrite)/triesInMemory)
					case bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit:
						log.Info("State in memory for too long, committing", "time", bc.gcproc, "allowance", bc.cacheConfig.TrieTimeLimit, "optimum", float64(chosen-lastWrite)/triesInMemory)
					}
				}
				// If optimum or critical limits reached, write to disk
				if chosen >= lastWrite+triesInMemory || size >= 2*limit || bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit {
					triedb.Commit(header.Root, true)
					lastWrite = chosen
					bc.gcproc = 0
				}
			}
			// Garbage collect anything below our required write retention
			for !bc.triegc.Empty() {
				root, number := bc.triegc.Pop()
				if uint64(-number) > chosen {
					bc.triegc.Push(root, number)
					break
				}
				triedb.Dereference(root.(common.Hash), common.Hash{})
			}
		}
	}
	// Write the final transaction receipts
	if err := WriteBlockReceipts(batch, block.Hash(), block.NumberU64(), receipts); err != nil {
		return NonStatTy, err
	}
	// If the total difficulty is higher than our known, add it to the canonical chain
	// Second clause in the if statement reduces the vulnerability to selfish mining.
	// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
	reorg := externTd.Cmp(localTd) > 0
	if !reorg && externTd.Cmp(localTd) == 0 {
		// Split same-difficulty blocks by number, then at random
		reorg = block.NumberU64() < bc.currentBlock.NumberU64() || (block.NumberU64() == bc.currentBlock.NumberU64() && mrand.Float64() < 0.5)
	}
	if reorg {
		// Reorganise the chain if the parent is not the head block
		if block.ParentHash() != bc.currentBlock.Hash() {
			if err := bc.reorg(bc.currentBlock, block); err != nil {
				return NonStatTy, err
			}
		}
		// Write the positional metadata for transaction and receipt lookups
		if err := WriteTxLookupEntries(batch, block); err != nil {
			return NonStatTy, err
		}
		// Write hash preimages
		if err := WritePreimages(bc.db, block.NumberU64(), state.Preimages()); err != nil {
			return NonStatTy, err
		}
		status = CanonStatTy
	} else {
		status = SideStatTy
	}
	if err := batch.Write(); err != nil {
		return NonStatTy, err
	}
	// Set new head.
	if status == CanonStatTy {
		bc.insert(block)
	}
	bc.futureBlocks.Remove(block.Hash())
	return status, nil
}

externTd > localTd: the newly mined block is valid and qualifies to become the new chain head.

externTd < localTd: someone mined a new block before you and its total difficulty is higher, so your block becomes a side (uncle) block.

externTd = localTd: someone mined a new block before you with exactly the same total difficulty. This should be extremely rare; when it happens, ties are split by block number first, then by a random draw that decides whether the newly mined block becomes the head.
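The same decision condensed into one function (a sketch with illustrative names, not the actual go-ethereum API):

package main

import (
	"fmt"
	"math/big"
)

// shouldReorg mirrors the reorg decision in WriteBlockWithState: higher total
// difficulty wins; ties are split by block number, then by a coin flip.
func shouldReorg(externTd, localTd *big.Int, newNum, curNum uint64, coin float64) bool {
	switch externTd.Cmp(localTd) {
	case 1:
		return true
	case 0:
		return newNum < curNum || (newNum == curNum && coin < 0.5)
	default:
		return false
	}
}

func main() {
	fmt.Println(shouldReorg(big.NewInt(200), big.NewInt(100), 11, 10, 0.9)) // true: more work
	fmt.Println(shouldReorg(big.NewInt(100), big.NewInt(100), 9, 10, 0.9))  // true: tie, lower number
	fmt.Println(shouldReorg(big.NewInt(50), big.NewInt(100), 9, 10, 0.1))   // false: less work
}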

After all of the above bookkeeping is done, the block is inserted with bc.insert(block), and a mined-block event is posted:

           self.mux.Post(core.NewMinedBlockEvent{Block: block})

eth/handler.go:

func (pm *ProtocolManager) minedBroadcastLoop() {
	// automatically stops if unsubscribe
	for obj := range pm.minedBlockSub.Chan() {
		switch ev := obj.Data.(type) {
		case core.NewMinedBlockEvent:
			pm.BroadcastBlock(ev.Block, true)  // First propagate block to peers
			pm.BroadcastBlock(ev.Block, false) // Only then announce to the rest
		}
	}
}

That completes the details of block production and chain insertion. As an aside, here is how Ethereum computes its mining difficulty:

// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
	next := new(big.Int).Add(parent.Number, big1)
	switch {
	case config.IsByzantium(next):
		// the newer Byzantium rules
		return calcDifficultyByzantium(time, parent)
	case config.IsHomestead(next):
		// the Homestead rules
		return calcDifficultyHomestead(time, parent)
	default:
		// the original Frontier rules
		return calcDifficultyFrontier(time, parent)
	}
}

The three difficulty rules correspond to Ethereum's three major releases: the historical Frontier, Homestead, and the newer Byzantium rules.

// calcDifficultyHomestead is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time given the
// parent block's time and difficulty. The calculation uses the Homestead rules.
func calcDifficultyHomestead(time uint64, parent *types.Header) *big.Int {
	// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md
	// algorithm:
	// diff = (parent_diff +
	//         (parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
	//        ) + 2^(periodCount - 2)

	bigTime := new(big.Int).SetUint64(time)
	bigParentTime := new(big.Int).Set(parent.Time)

	// holds intermediate values to make the algo easier to read & audit
	x := new(big.Int)
	y := new(big.Int)

	// 1 - (block_timestamp - parent_timestamp) // 10
	x.Sub(bigTime, bigParentTime)
	x.Div(x, big10)
	x.Sub(big1, x)

	// max(1 - (block_timestamp - parent_timestamp) // 10, -99)
	if x.Cmp(bigMinus99) < 0 {
		x.Set(bigMinus99)
	}
	// (parent_diff + parent_diff // 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
	y.Div(parent.Difficulty, params.DifficultyBoundDivisor)
	x.Mul(y, x)
	x.Add(parent.Difficulty, x)

	// minimum difficulty can ever be (before exponential factor)
	if x.Cmp(params.MinimumDifficulty) < 0 {
		x.Set(params.MinimumDifficulty)
	}
	// for the exponential factor
	periodCount := new(big.Int).Add(parent.Number, big1)
	periodCount.Div(periodCount, expDiffPeriod)

	// the exponential factor, commonly referred to as "the bomb"
	// diff = diff + 2^(periodCount - 2)
	if periodCount.Cmp(big1) > 0 {
		y.Sub(periodCount, big2)
		y.Exp(big2, y, nil)
		x.Add(x, y)
	}
	return x
}

In other words:

block_diff = parent_diff + difficulty adjustment + difficulty bomb

difficulty adjustment = parent_diff // 2048 * MAX(1 - (block_timestamp - parent_timestamp) // 10, -99)

difficulty bomb = INT(2**((block_number // 100000) - 2))
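A self-contained numeric sketch of the Homestead rule with plain integers (it skips the minimum-difficulty floor and uses int64 instead of big.Int, purely for readability):

package main

import "fmt"

// homesteadDiff mirrors the formula above: adjust by parentDiff/2048 scaled by
// the clamped timestamp delta, then add the exponential bomb term.
func homesteadDiff(parentDiff, parentTime, blockTime, blockNumber int64) int64 {
	adj := 1 - (blockTime-parentTime)/10
	if adj < -99 {
		adj = -99
	}
	diff := parentDiff + parentDiff/2048*adj

	// The difficulty bomb: + 2^(blockNumber/100000 - 2)
	if period := blockNumber / 100000; period > 1 {
		diff += int64(1) << uint(period-2)
	}
	return diff
}

func main() {
	// Fast block (4 s gap): difficulty rises by parentDiff/2048 = 1464.
	fmt.Println(homesteadDiff(3000000, 0, 4, 150000)) // 3001464
	// Slow block (25 s gap): 1 - 25/10 = -1, so difficulty drops by 1464.
	fmt.Println(homesteadDiff(3000000, 0, 25, 150000)) // 2998536
}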

In addition, block difficulty can never fall below that of Ethereum's genesis block, which is 131072; this is the lower bound on Ethereum's difficulty.

I consulted quite a few resources online; writing about Ethereum really does take a lot of time and energy.


Source: blog.csdn.net/fpcc/article/details/80871005