Decode Lido

Tags
Web3
LST
Published
February 18, 2024
Author
Senn

Introduction

Lido is a decentralized finance (DeFi) protocol that offers liquid staking solutions for various blockchain networks, including Ethereum and Polygon. It addresses a fundamental challenge faced by participants in Proof of Stake (PoS) networks: the illiquidity of staked assets.

What Problem Does Lido Solve?

In traditional PoS mechanisms, participants (validators) stake their cryptocurrency to secure the network and, in return, earn staking rewards. However, staked assets are often locked up for a significant period, reducing liquidity and flexibility for the staker. This lock-up period can deter potential validators due to the opportunity cost of not being able to use their staked assets for other investment opportunities or transactions.

How Lido Works

  1. Tokenization of Staked Assets: When users stake their assets with Lido, they receive tokenized versions of their staked assets in return. For example, when you stake Ethereum (ETH) with Lido, you receive stETH (staked ETH) tokens. These tokens represent your staked ETH plus any staking rewards earned. The key advantage is that stETH can be traded, used in DeFi applications, or even posted as collateral on lending platforms, providing liquidity to otherwise locked assets.
  2. Decentralized and Trustless: Lido operates in a decentralized manner, leveraging smart contracts to automate staking and rewards distribution. It doesn't rely on any single validator or entity; instead, it uses an oracle consensus system to reduce centralization risks and increase security for stakers.
  3. Rewards and Compounding: The tokenized staked assets (e.g., stETH) are designed to increase in value over time as staking rewards are earned, allowing holders to benefit from compounding rewards automatically without needing to manually claim or restake their rewards.
  4. Broad Accessibility: By lowering the entry barrier to staking, Lido enables users with smaller amounts of cryptocurrency to participate in staking and earn rewards, democratizing access to staking benefits.

Architecture Diagram

[architecture diagram]

Stake (stETH, Lido.sol)

When you stake on Lido's stake page, you send a transaction that calls Lido.submit.
 
Lido.sol is the stETH contract. The submit function basically accepts the user's Ether, performs certain restriction checks, and then mints the corresponding shares to the user. Yes, shares, not tokens. Staked Ether is used by validators to participate in the consensus layer, and validators are rewarded or slashed based on their behavior, so the amount of Ether in the pool changes over time. Stakers get their Ether back based on their shares, just like shareholders in a joint-stock company receive benefits in proportion to their shares.
In the submit function, a staker can choose to attach a referral address, which will be recorded in the Submitted event.

Staking rate control

Lido.sol uses StakeLimitState to control staking limits. Basically, Lido ensures that:
  • there is a maximum amount of Ether that can be staked in a single transaction at any given time;
  • the overall rate of staking per block stays bounded.
  • prevStakeBlockNumber: block number of the latest stake
  • prevStakeLimit: unused stake limit left over from previous blocks
  • maxStakeLimitGrowthBlocks: the number of blocks over which the limit regenerates up to maxStakeLimit
  • maxStakeLimit: the maximum stake limit, accumulated over maxStakeLimitGrowthBlocks blocks
Note that Lido.sol is deployed behind a proxy, so it uses unstructured storage slots and libraries to fetch some critical data.
Before minting shares to the staker, Lido.sol performs some checks:
  • msg.value must not be zero;
  • staking must not be paused (if StakeLimitState.prevStakeBlockNumber is set to 0, staking has been paused);
  • the staking limit must not be exceeded.
```solidity
//lido/lido-dao/contracts/0.4.24/Lido.sol

/**
 * @notice Send funds to the pool with optional _referral parameter
 * @dev This function is alternative way to submit funds. Supports optional referral address.
 * @return Amount of StETH shares generated
 */
function submit(address _referral) external payable returns (uint256) {
    return _submit(_referral);
}

/**
 * @dev Internal representation struct (slot-wide)
 */
struct Data {
    uint32 prevStakeBlockNumber;      // block number of the previous stake submit
    uint96 prevStakeLimit;            // limit value (<= `maxStakeLimit`) obtained on the previous stake submit
    uint32 maxStakeLimitGrowthBlocks; // limit regeneration speed expressed in blocks
    uint96 maxStakeLimit;             // maximum limit value
}

/**
 * @dev Process user deposit, mints liquid tokens and increase the pool buffer
 * @param _referral address of referral.
 * @return amount of StETH shares generated
 */
function _submit(address _referral) internal returns (uint256) {
    require(msg.value != 0, "ZERO_DEPOSIT");

    StakeLimitState.Data memory stakeLimitData = STAKING_STATE_POSITION.getStorageStakeLimitStruct();
    // There is an invariant that protocol pause also implies staking pause.
    // Thus, no need to check protocol pause explicitly.
    require(!stakeLimitData.isStakingPaused(), "STAKING_PAUSED");

    if (stakeLimitData.isStakingLimitSet()) {
        uint256 currentStakeLimit = stakeLimitData.calculateCurrentStakeLimit();
        require(msg.value <= currentStakeLimit, "STAKE_LIMIT");
        STAKING_STATE_POSITION.setStorageStakeLimitStruct(
            stakeLimitData.updatePrevStakeLimit(currentStakeLimit - msg.value)
        );
    }

    uint256 sharesAmount = getSharesByPooledEth(msg.value);
    _mintShares(msg.sender, sharesAmount);

    _setBufferedEther(_getBufferedEther().add(msg.value));
    emit Submitted(msg.sender, msg.value, _referral);

    _emitTransferAfterMintingShares(msg.sender, sharesAmount);
    return sharesAmount;
}
```
 
Lido.sol uses stakeLimitData.calculateCurrentStakeLimit() to calculate the stake cap of the current stake transaction. After each staking operation, prevStakeBlockNumber and prevStakeLimit are updated accordingly. The current limit follows this formula:

currentStakeLimit = min(psl + (currentBlockNumber − psbn) × msl / mslgb, msl)

where:
  • msl: maxStakeLimit
  • mslgb: maxStakeLimitGrowthBlocks
  • psl: prevStakeLimit
  • psbn: prevStakeBlockNumber
Here is a basic example illustrating this formula.
Assume we set:
  • maxStakeLimit to 1,000 Ether.
  • maxStakeLimitGrowthBlocks to 100.
  • prevStakeBlockNumber to 2,000.
  • prevStakeLimit to 0.
This setting means we want to keep the staking rate close to 10 Ether per block, with a maximum limit of 1,000 Ether, starting from block 2,000.
If there is no staking from block 2,000 to block 2,019 and Alice chooses to stake at block 2,020, then Alice can stake up to 200 Ether (20 blocks × 10 Ether per block).
If Alice chooses to stake 100 Ether, prevStakeBlockNumber is updated to 2,020 and prevStakeLimit is updated to 100 (= 200 − 100). The unused prevStakeLimit carries over to the next stake operation: when Bob tries to stake at block 2,030, his stake cap will be 200 Ether (100 carried over + 10 blocks × 10 Ether per block).
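To make the bookkeeping concrete, here is a minimal Python model of the limit logic; the names mirror the Solidity fields and the numbers reproduce the example above (an illustration, not Lido's code):

```python
# Minimal model of calculateCurrentStakeLimit / updatePrevStakeLimit.
MAX_STAKE_LIMIT = 1_000               # msl, in Ether
MAX_STAKE_LIMIT_GROWTH_BLOCKS = 100   # mslgb
prev_stake_limit = 0                  # psl
prev_stake_block_number = 2_000       # psbn

def current_stake_limit(block_number: int) -> int:
    inc_per_block = MAX_STAKE_LIMIT // MAX_STAKE_LIMIT_GROWTH_BLOCKS  # 10 Ether/block
    blocks_passed = block_number - prev_stake_block_number
    projected = prev_stake_limit + blocks_passed * inc_per_block
    return min(projected, MAX_STAKE_LIMIT)

def stake(block_number: int, amount: int) -> None:
    global prev_stake_limit, prev_stake_block_number
    limit = current_stake_limit(block_number)
    assert amount <= limit, "STAKE_LIMIT"
    prev_stake_limit = limit - amount          # unused limit carries over
    prev_stake_block_number = block_number

print(current_stake_limit(2_020))  # 200: Alice's cap at block 2,020
stake(2_020, 100)                  # Alice stakes 100 Ether
print(current_stake_limit(2_030))  # 200: Bob's cap = 100 carried over + 10 blocks * 10
```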
```solidity
//lido/lido-dao/contracts/0.4.24/lib/StakeLimitUtils.sol

/**
 * @notice Calculate stake limit for the current block.
 * @dev using `_constGasMin` to make gas consumption independent of the current block number
 */
function calculateCurrentStakeLimit(StakeLimitState.Data memory _data) internal view returns(uint256 limit) {
    uint256 stakeLimitIncPerBlock;
    if (_data.maxStakeLimitGrowthBlocks != 0) {
        stakeLimitIncPerBlock = _data.maxStakeLimit / _data.maxStakeLimitGrowthBlocks;
    }

    uint256 blocksPassed = block.number - _data.prevStakeBlockNumber;
    uint256 projectedLimit = _data.prevStakeLimit + blocksPassed * stakeLimitIncPerBlock;

    limit = _constGasMin(
        projectedLimit,
        _data.maxStakeLimit
    );
}

/**
 * @notice update stake limit repr after submitting user's eth
 * @dev input `_data` param is mutated and the func returns effectively the same pointer
 * @param _data stake limit state struct
 * @param _newPrevStakeLimit new value for the `prevStakeLimit` field
 */
function updatePrevStakeLimit(
    StakeLimitState.Data memory _data,
    uint256 _newPrevStakeLimit
) internal view returns (StakeLimitState.Data memory) {
    assert(_newPrevStakeLimit <= uint96(-1));
    assert(_data.prevStakeBlockNumber != 0);

    _data.prevStakeLimit = uint96(_newPrevStakeLimit);
    _data.prevStakeBlockNumber = uint32(block.number);

    return _data;
}

/**
 * @notice find a minimum of two numbers with a constant gas consumption
 * @dev doesn't use branching logic inside
 * @param _lhs left hand side value
 * @param _rhs right hand side value
 */
function _constGasMin(uint256 _lhs, uint256 _rhs) internal pure returns (uint256 min) {
    uint256 lhsIsLess;
    assembly {
        lhsIsLess := lt(_lhs, _rhs) // lhsIsLess = (_lhs < _rhs) ? 1 : 0
    }
    min = (_lhs * lhsIsLess) + (_rhs * (1 - lhsIsLess));
}
```
 

Shares calculation

The number of shares to mint is calculated from the current total shares and the total pooled Ether:

sharesToMint = ethStaked × totalShares / totalPooledEther

For example, if there are 10 total shares and 100 pooled Ether, and Alice stakes 10 Ether, then Alice will get 10 × 10 / 100 = 1 share.
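A minimal Python model of the share math (illustrative; it mirrors getSharesByPooledEth and getPooledEthByShares, with integer division as in the contract):

```python
# Toy model of stETH share accounting, not Lido's code.
total_shares = 10
total_pooled_eth = 100

def shares_for_deposit(eth_amount: int) -> int:
    # mirrors getSharesByPooledEth: shares = eth * totalShares / totalPooledEther
    return eth_amount * total_shares // total_pooled_eth

def balance_of(shares: int) -> int:
    # mirrors getPooledEthByShares: eth = shares * totalPooledEther / totalShares
    return shares * total_pooled_eth // total_shares

alice_shares = shares_for_deposit(10)          # 10 * 10 / 100 = 1 share
total_shares += alice_shares                   # pool now has 11 shares
total_pooled_eth += 10                         # and 110 pooled Ether
print(alice_shares, balance_of(alice_shares))  # 1 share, worth 110 * 1 / 11 = 10 ETH
```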
 
Pooled ethers consists of 3 parts:
  • Buffer
    • amount of Ether that users have sent to the Lido smart contract but has not yet been processed by the protocol. It is essentially "waiting" in the Lido contract's balance to be staked on the Consensus Layer (formerly known as the Beacon Chain).
  • Consensus layer balance (CL balance)
    • the total amount of Ether that has already been staked by Lido's validators on the Consensus Layer. It reflects the sum of the balances of all Lido validators that are active and participating in the Ethereum consensus mechanism.
  • Transient balance
    • the Ether that has been sent to the Ethereum 2.0 Deposit Contract via Lido but has not yet been reflected in the CL Balance. This happens because there is a delay between when Ether is sent to the Ethereum 2.0 Deposit Contract and when the corresponding validators are activated and their balances are included in the CL Balance. During this "transient" state, the Ether is effectively locked and "in transit": it is neither in the Lido smart contract (buffered) nor active on the Consensus Layer. So the transient balance can be calculated as transientBalance = (depositedValidators − clValidators) × 32 Ether:
```solidity
//lido/lido-dao/contracts/0.4.24/Lido.sol

bytes32 internal constant DEPOSITED_VALIDATORS_POSITION =
    0xe6e35175eb53fc006520a2a9c3e9711a7c00de6ff2c32dd31df8c5a24cac1b5c; // keccak256("lido.Lido.depositedValidators");
bytes32 internal constant CL_BALANCE_POSITION =
    0xa66d35f054e68143c18f32c990ed5cb972bb68a68f500cd2dd3a16bbf3686483; // keccak256("lido.Lido.beaconBalance");
bytes32 internal constant CL_VALIDATORS_POSITION =
    0x9f70001d82b6ef54e9d3725b46581c3eb9ee3aa02b941b6aa54d678a9ca35b10; // keccak256("lido.Lido.beaconValidators");

uint256 private constant DEPOSIT_SIZE = 32 ether;

bytes32 internal constant TOTAL_SHARES_POSITION =
    0xe3b4b636e601189b5f4c6742edf2538ac12bb61ed03e6da26949d69838fa447e; // keccak256("lido.StETH.totalShares")

/**
 * @return the amount of shares that corresponds to `_ethAmount` protocol-controlled Ether.
 */
function getSharesByPooledEth(uint256 _ethAmount) public view returns (uint256) {
    return _ethAmount
        .mul(_getTotalShares())
        .div(_getTotalPooledEther());
}

/**
 * @dev Gets the total amount of Ether controlled by the system
 * @return total balance in wei
 */
function _getTotalPooledEther() internal view returns (uint256) {
    return _getBufferedEther()
        .add(CL_BALANCE_POSITION.getStorageUint256())
        .add(_getTransientBalance());
}

/**
 * @dev Gets the amount of Ether temporary buffered on this contract balance
 */
function _getBufferedEther() internal view returns (uint256) {
    return BUFFERED_ETHER_POSITION.getStorageUint256();
}

/// @dev Calculates and returns the total base balance (multiple of 32) of validators in transient state,
///     i.e. submitted to the official Deposit contract but not yet visible in the CL state.
/// @return transient balance in wei (1e-18 Ether)
function _getTransientBalance() internal view returns (uint256) {
    uint256 depositedValidators = DEPOSITED_VALIDATORS_POSITION.getStorageUint256();
    uint256 clValidators = CL_VALIDATORS_POSITION.getStorageUint256();
    // clValidators can never be less than deposited ones.
    assert(depositedValidators >= clValidators);
    return (depositedValidators - clValidators).mul(DEPOSIT_SIZE);
}

/**
 * @return the total amount of shares in existence.
 */
function _getTotalShares() internal view returns (uint256) {
    return TOTAL_SHARES_POSITION.getStorageUint256();
}
```
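Here is a small Python sketch of how the three components combine, with made-up values:

```python
# Illustrative model of _getTotalPooledEther's three components.
DEPOSIT_SIZE = 32             # Ether per validator

buffered_ether = 500          # Ether sitting in Lido.sol, not yet deposited
cl_balance = 9_600            # sum of Lido validator balances on the Consensus Layer
deposited_validators = 310    # validators Lido has sent to the deposit contract
cl_validators = 300           # validators already visible on the Consensus Layer

transient_balance = (deposited_validators - cl_validators) * DEPOSIT_SIZE  # 320
total_pooled_ether = buffered_ether + cl_balance + transient_balance       # 10_420
print(total_pooled_ether)
```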
 
After calculating the shares that need to be minted, Lido.sol mints the corresponding shares to the recipient and updates bufferedEther to reflect the increase of Ether in the pool.
```solidity
function _mintShares(address _recipient, uint256 _sharesAmount) internal returns (uint256 newTotalShares) {
    require(_recipient != address(0), "MINT_TO_ZERO_ADDR");

    newTotalShares = _getTotalShares().add(_sharesAmount);
    TOTAL_SHARES_POSITION.setStorageUint256(newTotalShares);

    shares[_recipient] = shares[_recipient].add(_sharesAmount);
}
```
 
Lido.sol inherits from StETH.sol, which implements the ERC20 interface. But there are some differences between stETH and a common ERC20 token. The biggest difference is the changing balance: a stETH balance represents the Ether owned in the Lido protocol, but as we have seen, what is actually minted to the staker is shares, which represent proportional ownership of the pool. As validators generate rewards and penalties, the pooled Ether changes, which in turn changes each staker's stETH balance.
```solidity
/**
 * @return the amount of tokens owned by the `_account`.
 *
 * @dev Balances are dynamic and equal the `_account`'s share in the amount of the
 * total Ether controlled by the protocol. See `sharesOf`.
 */
function balanceOf(address _account) external view returns (uint256) {
    return getPooledEthByShares(_sharesOf(_account));
}

/**
 * @return the amount of Ether that corresponds to `_sharesAmount` token shares.
 */
function getPooledEthByShares(uint256 _sharesAmount) public view returns (uint256) {
    return _sharesAmount
        .mul(_getTotalPooledEther())
        .div(_getTotalShares());
}
```
This changing-balance property leads to integration problems with other protocols. Many existing DeFi protocols, such as lending platforms and automated market makers (AMMs), rely on stable balances for deposited tokens to manage liquidity pools, lending ratios, and reward distributions. The rebase mechanism of stETH can disrupt these protocols' accounting, making integration more complex or less efficient. The solution is wstETH (wrapped stETH).
 
Another difference is the transferFrom function. The _amount parameter is the stETH (Ether) amount to transfer, but internally transferFrom calculates and moves the corresponding shares between accounts.
```solidity
/**
 * @notice Moves `_amount` tokens from `_sender` to `_recipient` using the allowance mechanism.
 */
function transferFrom(address _sender, address _recipient, uint256 _amount) external returns (bool) {
    _spendAllowance(_sender, msg.sender, _amount);
    _transfer(_sender, _recipient, _amount);
    return true;
}

/**
 * @notice Moves `_amount` tokens from `_sender` to `_recipient`.
 */
function _transfer(address _sender, address _recipient, uint256 _amount) internal {
    uint256 _sharesToTransfer = getSharesByPooledEth(_amount);
    _transferShares(_sender, _recipient, _sharesToTransfer);
    _emitTransferEvents(_sender, _recipient, _amount, _sharesToTransfer);
}

/**
 * @notice Moves `_sharesAmount` shares from `_sender` to `_recipient`.
 */
function _transferShares(address _sender, address _recipient, uint256 _sharesAmount) internal {
    require(_sender != address(0), "TRANSFER_FROM_ZERO_ADDR");
    require(_recipient != address(0), "TRANSFER_TO_ZERO_ADDR");
    require(_recipient != address(this), "TRANSFER_TO_STETH_CONTRACT");
    _whenNotStopped();

    uint256 currentSenderShares = shares[_sender];
    require(_sharesAmount <= currentSenderShares, "BALANCE_EXCEEDED");

    shares[_sender] = currentSenderShares.sub(_sharesAmount);
    shares[_recipient] = shares[_recipient].add(_sharesAmount);
}
```
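Because the token amount is converted to shares with integer division, a transfer can move slightly less value than requested. A toy Python model, with deliberately tiny numbers to exaggerate the rounding, shows why stETH balances can end up a wei or two short after a transfer:

```python
# Toy model of the share conversion in _transfer (not Lido's code).
total_shares = 3
total_pooled_eth = 10  # tiny numbers to exaggerate rounding

def shares_by_pooled_eth(amount: int) -> int:
    return amount * total_shares // total_pooled_eth

def pooled_eth_by_shares(shares: int) -> int:
    return shares * total_pooled_eth // total_shares

amount = 5                                   # caller asks to transfer 5 wei of stETH
shares_moved = shares_by_pooled_eth(amount)  # 5 * 3 // 10 = 1 share
print(pooled_eth_by_shares(shares_moved))    # recipient's balance grows by only 3 wei
```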
 

Wrap (wstETH)

wstETH (wrapped stETH) is used to solve the variable-balance problem introduced by stETH.
 

Wrap

A staker can call wstETH.wrap to transfer stETH to the wstETH contract in exchange for wstETH. The amount of wstETH minted is in fact the amount of the corresponding shares; because shares don't rebase, the wstETH balance won't change.
```solidity
/**
 * @notice Exchanges stETH to wstETH
 * @param _stETHAmount amount of stETH to wrap in exchange for wstETH
 * @dev Requirements:
 */
function wrap(uint256 _stETHAmount) external returns (uint256) {
    require(_stETHAmount > 0, "wstETH: can't wrap zero stETH");
    uint256 wstETHAmount = stETH.getSharesByPooledEth(_stETHAmount);
    _mint(msg.sender, wstETHAmount);
    stETH.transferFrom(msg.sender, address(this), _stETHAmount);
    return wstETHAmount;
}
```
 

Unwrap

Users can unwrap by calling wstETH.unwrap; wstETH.sol burns the wstETH and transfers the corresponding stETH back to the user.
```solidity
/**
 * @notice Exchanges wstETH to stETH
 * @param _wstETHAmount amount of wstETH to uwrap in exchange for stETH
 * @dev Requirements:
 *  - `_wstETHAmount` must be non-zero
 *  - msg.sender must have at least `_wstETHAmount` wstETH.
 * @return Amount of stETH user receives after unwrap
 */
function unwrap(uint256 _wstETHAmount) external returns (uint256) {
    require(_wstETHAmount > 0, "wstETH: zero amount unwrap not allowed");
    uint256 stETHAmount = stETH.getPooledEthByShares(_wstETHAmount);
    _burn(msg.sender, _wstETHAmount);
    stETH.transfer(msg.sender, stETHAmount);
    return stETHAmount;
}
```
Because a wstETH balance is in fact a share count, which doesn't change automatically, wstETH is compatible with other protocols.
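The wrap/unwrap round trip is pure share math. A small Python illustration with assumed pool numbers:

```python
# Illustrative model of wrap/unwrap share math (not Lido's code).
total_shares = 11
total_pooled_eth = 110

def get_shares_by_pooled_eth(steth: int) -> int:
    return steth * total_shares // total_pooled_eth

def get_pooled_eth_by_shares(shares: int) -> int:
    return shares * total_pooled_eth // total_shares

wsteth = get_shares_by_pooled_eth(10)    # wrap 10 stETH -> 1 wstETH (1 share)
total_pooled_eth += 11                   # suppose rewards accrue: a +10% rebase
print(get_pooled_eth_by_shares(wsteth))  # unwrap -> 11 stETH; the wstETH balance never moved
```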
 

Withdraw

Stakers can withdraw their Ether, but not immediately: a staker must first submit a withdrawal request, the request takes several days to be finalized, and after that the Ether can be claimed.
 

Request

A staker calls WithdrawalQueueERC721.requestWithdrawalsWithPermit to issue a request. A staker can issue one or multiple withdrawal requests in a single transaction, but each request's amount is bounded from below and above. WithdrawalQueueERC721 first calls StETHPermit to approve the owner's spending allowance to itself, using the owner's signature.
```solidity
//lido/lido-dao/contracts/0.8.9/WithdrawalQueueERC721.sol

/// @notice Request the batch of stETH for withdrawal using EIP-2612 Permit
/// @param _amounts an array of stETH amount values
///  The standalone withdrawal request will be created for each item in the passed list.
/// @param _owner address that will be able to manage the created requests.
///  If `address(0)` is passed, `msg.sender` will be used as an owner.
/// @param _permit data required for the stETH.permit() method to set the allowance
/// @return requestIds an array of the created withdrawal request ids
function requestWithdrawalsWithPermit(uint256[] calldata _amounts, address _owner, PermitInput calldata _permit)
    external
    returns (uint256[] memory requestIds)
{
    STETH.permit(msg.sender, address(this), _permit.value, _permit.deadline, _permit.v, _permit.r, _permit.s);
    return requestWithdrawals(_amounts, _owner);
}
```
```solidity
//lido/lido-dao/contracts/0.4.24/StETHPermit.sol

/**
 * @dev Sets `value` as the allowance of `spender` over ``owner``'s tokens,
 * given ``owner``'s signed approval.
 */
function permit(
    address _owner, address _spender, uint256 _value, uint256 _deadline,
    uint8 _v, bytes32 _r, bytes32 _s
) external {
    require(block.timestamp <= _deadline, "DEADLINE_EXPIRED");

    bytes32 structHash = keccak256(
        abi.encode(PERMIT_TYPEHASH, _owner, _spender, _value, _useNonce(_owner), _deadline)
    );

    bytes32 hash = IEIP712StETH(getEIP712StETH()).hashTypedDataV4(address(this), structHash);

    require(SignatureUtils.isValidSignature(_owner, hash, _v, _r, _s), "INVALID_SIGNATURE");
    _approve(_owner, _spender, _value);
}
```
 
After WithdrawalQueueERC721 has been granted the staker's allowance, it creates the withdrawal requests for the user.
```solidity
//lido/lido-dao/contracts/0.8.9/WithdrawalQueue.sol

/// @notice Request the batch of stETH for withdrawal. Approvals for the passed amounts should be done before.
/// @param _amounts an array of stETH amount values.
///  The standalone withdrawal request will be created for each item in the passed list.
/// @param _owner address that will be able to manage the created requests.
///  If `address(0)` is passed, `msg.sender` will be used as owner.
/// @return requestIds an array of the created withdrawal request ids
function requestWithdrawals(uint256[] calldata _amounts, address _owner)
    public
    returns (uint256[] memory requestIds)
{
    // require the withdraw functionality has not been paused
    _checkResumed();
    if (_owner == address(0)) _owner = msg.sender;
    requestIds = new uint256[](_amounts.length);
    for (uint256 i = 0; i < _amounts.length; ++i) {
        // check the withdraw amount is in the restricted range
        _checkWithdrawalRequestAmount(_amounts[i]);
        requestIds[i] = _requestWithdrawal(_amounts[i], _owner);
    }
}

function _checkResumed() internal view {
    if (isPaused()) {
        revert ResumedExpected();
    }
}

function _checkWithdrawalRequestAmount(uint256 _amountOfStETH) internal pure {
    if (_amountOfStETH < MIN_STETH_WITHDRAWAL_AMOUNT) {
        revert RequestAmountTooSmall(_amountOfStETH);
    }
    if (_amountOfStETH > MAX_STETH_WITHDRAWAL_AMOUNT) {
        revert RequestAmountTooLarge(_amountOfStETH);
    }
}
```
 
The _requestWithdrawal function first transfers the stETH from msg.sender to the queue contract (note that msg.sender has already approved the stETH to WithdrawalQueueERC721 via the STETH.permit function). Then it calls _enqueue to register the user's withdrawal request.
```solidity
//lido/lido-dao/contracts/0.8.9/WithdrawalQueue.sol

function _requestWithdrawal(uint256 _amountOfStETH, address _owner) internal returns (uint256 requestId) {
    STETH.transferFrom(msg.sender, address(this), _amountOfStETH);

    uint256 amountOfShares = STETH.getSharesByPooledEth(_amountOfStETH);

    requestId = _enqueue(uint128(_amountOfStETH), uint128(amountOfShares), _owner);

    _emitTransfer(address(0), _owner, requestId);
}
```
 
Inside _enqueue, the contract constructs and stores the WithdrawalRequest struct for the current request, which records the user's withdrawal request:
  • cumulativeStETH: sum of all the stETH submitted for withdrawals, including this request
  • cumulativeShares: sum of all the shares locked for withdrawal, including this request
  • owner: address that can claim or transfer the request
  • timestamp: block.timestamp when the request was created
  • claimed: flag indicating whether the request was claimed
  • reportTimestamp: timestamp of the last oracle report for this request
Note that the fields cumulativeStETH and cumulativeShares are cumulative: during the claim operation, Lido takes the difference between two neighboring requests to calculate the latter request's stETH and share amounts. Request data is stored in a mapping(uint256 => WithdrawalRequest) keyed by request id, and each account has a set that stores its request ids.
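A minimal Python model of this prefix-sum bookkeeping (illustrative, not Lido's code):

```python
# The queue stores running totals; a request's own amounts are recovered by
# subtracting the previous entry.
queue = [
    {"cumulativeStETH": 0, "cumulativeShares": 0},  # sentinel, request id 0
]

def enqueue(amount_steth: int, amount_shares: int) -> int:
    last = queue[-1]
    queue.append({
        "cumulativeStETH": last["cumulativeStETH"] + amount_steth,
        "cumulativeShares": last["cumulativeShares"] + amount_shares,
    })
    return len(queue) - 1  # request id

def amounts_of(request_id: int):
    cur, prev = queue[request_id], queue[request_id - 1]
    return (cur["cumulativeStETH"] - prev["cumulativeStETH"],
            cur["cumulativeShares"] - prev["cumulativeShares"])

enqueue(32, 30)
rid = enqueue(5, 4)
print(amounts_of(rid))  # (5, 4): this request's own stETH and share amounts
```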
```solidity
//lido/lido-dao/contracts/0.8.9/WithdrawalQueue.sol

/// @dev creates a new `WithdrawalRequest` in the queue
///  Emits WithdrawalRequested event
function _enqueue(uint128 _amountOfStETH, uint128 _amountOfShares, address _owner)
    internal
    returns (uint256 requestId)
{
    uint256 lastRequestId = getLastRequestId();
    WithdrawalRequest memory lastRequest = _getQueue()[lastRequestId];

    uint128 cumulativeShares = lastRequest.cumulativeShares + _amountOfShares;
    uint128 cumulativeStETH = lastRequest.cumulativeStETH + _amountOfStETH;

    requestId = lastRequestId + 1;

    _setLastRequestId(requestId);

    WithdrawalRequest memory newRequest = WithdrawalRequest(
        cumulativeStETH,
        cumulativeShares,
        _owner,
        uint40(block.timestamp),
        false,
        uint40(_getLastReportTimestamp())
    );
    _getQueue()[requestId] = newRequest;
    assert(_getRequestsByOwner()[_owner].add(requestId));

    emit WithdrawalRequested(requestId, msg.sender, _owner, _amountOfStETH, _amountOfShares);
}

/// @dev last index in request queue
bytes32 internal constant LAST_REQUEST_ID_POSITION = keccak256("lido.WithdrawalQueue.lastRequestId");

/// @notice id of the last request
function getLastRequestId() public view returns (uint256) {
    return LAST_REQUEST_ID_POSITION.getStorageUint256();
}

/// @dev queue for withdrawal requests, indexes (requestId) start from 1
bytes32 internal constant QUEUE_POSITION = keccak256("lido.WithdrawalQueue.queue");

/// @notice structure representing a request for withdrawal
struct WithdrawalRequest {
    /// @notice sum of the all stETH submitted for withdrawals including this request
    uint128 cumulativeStETH;
    /// @notice sum of the all shares locked for withdrawal including this request
    uint128 cumulativeShares;
    /// @notice address that can claim or transfer the request
    address owner;
    /// @notice block.timestamp when the request was created
    uint40 timestamp;
    /// @notice flag if the request was claimed
    bool claimed;
    /// @notice timestamp of last oracle report for this request
    uint40 reportTimestamp;
}

// Internal getters and setters for unstructured storage
function _getQueue() internal pure returns (mapping(uint256 => WithdrawalRequest) storage queue) {
    bytes32 position = QUEUE_POSITION;
    assembly {
        queue.slot := position
    }
}

/// @dev timestamp of the last oracle report
bytes32 internal constant LAST_REPORT_TIMESTAMP_POSITION = keccak256("lido.WithdrawalQueue.lastReportTimestamp");

function _getLastReportTimestamp() internal view returns (uint256) {
    return LAST_REPORT_TIMESTAMP_POSITION.getStorageUint256();
}

function _getRequestsByOwner()
    internal
    pure
    returns (mapping(address => EnumerableSet.UintSet) storage requestsByOwner)
{
    bytes32 position = REQUEST_BY_OWNER_POSITION;
    assembly {
        requestsByOwner.slot := position
    }
}
```
 
If you have sent a withdrawal request, you'll find a new Lido withdrawal NFT in your account. I think Lido uses an NFT here for two main reasons, which I'll dive into below:
  • it lets users see the status of their request clearly through the NFT's image (yes, the NFT's image reflects the status of the withdrawal);
  • it facilitates the transfer of withdrawal requests.
 
The interesting thing is that Lido doesn't "mint" a token to your account the way common NFT protocols do. Instead, Lido overrides the balanceOf function in WithdrawalQueueERC721.sol so that it returns the number of withdrawal requests owned by the user.
[images: the two variants of the withdrawal NFT]
```solidity
//lido/lido-dao/contracts/0.8.9/WithdrawalQueueERC721.sol

function balanceOf(address _owner) external view override returns (uint256) {
    if (_owner == address(0)) revert InvalidOwnerAddress(_owner);
    return _getRequestsByOwner()[_owner].length();
}

function _getRequestsByOwner()
    internal
    pure
    returns (mapping(address => EnumerableSet.UintSet) storage requestsByOwner)
{
    bytes32 position = REQUEST_BY_OWNER_POSITION;
    assembly {
        requestsByOwner.slot := position
    }
}
```
 
You may notice that there are two kinds of images; this is because the tokenURI depends on the status of the withdrawal request.
The tokenURI contains the following information:
  • the stETH amount requested at request time;
  • the creation time (block.timestamp);
  • whether the request is finalized and, if so, the claimable amount of Ether.
There are some unfamiliar terms here, like the checkpoint hint and the finalized request id; I'll explain them in the claim section.
```solidity
/// @dev See {IERC721Metadata-tokenURI}.
/// @dev If NFTDescriptor address isn't set the `baseURI` would be used for generating erc721 tokenURI. In case
///  NFTDescriptor address is set it would be used as a first-priority method.
function tokenURI(uint256 _requestId) public view virtual override returns (string memory) {
    if (!_existsAndNotClaimed(_requestId)) revert InvalidRequestId(_requestId);

    address nftDescriptorAddress = NFT_DESCRIPTOR_ADDRESS_POSITION.getStorageAddress();
    if (nftDescriptorAddress != address(0)) {
        return INFTDescriptor(nftDescriptorAddress).constructTokenURI(_requestId);
    } else {
        return _constructTokenUri(_requestId);
    }
}

/// @dev Returns whether `_requestId` exists and not claimed.
function _existsAndNotClaimed(uint256 _requestId) internal view returns (bool) {
    return _requestId > 0 && _requestId <= getLastRequestId() && !_getQueue()[_requestId].claimed;
}

function _constructTokenUri(uint256 _requestId) internal view returns (string memory) {
    string memory baseURI = _getBaseURI().value;
    if (bytes(baseURI).length == 0) return "";

    // ${baseUri}/${_requestId}?requested=${amount}&created_at=${timestamp}[&finalized=${claimableAmount}]
    string memory uri = string(
        // we have no string.concat in 0.8.9 yet, so we have to do it with bytes.concat
        bytes.concat(
            bytes(baseURI),
            bytes("/"),
            bytes(_requestId.toString()),
            bytes("?requested="),
            bytes(
                uint256(_getQueue()[_requestId].cumulativeStETH - _getQueue()[_requestId - 1].cumulativeStETH)
                    .toString()
            ),
            bytes("&created_at="),
            bytes(uint256(_getQueue()[_requestId].timestamp).toString())
        )
    );

    bool finalized = _requestId <= getLastFinalizedRequestId();

    if (finalized) {
        uri = string(
            bytes.concat(
                bytes(uri),
                bytes("&finalized="),
                bytes(
                    _getClaimableEther(_requestId, _findCheckpointHint(_requestId, 1, getLastCheckpointIndex()))
                        .toString()
                )
            )
        );
    }

    return uri;
}
```
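For illustration, a finalized request's URI built by _constructTokenUri would look something like this, following the format in the comment above (the baseURI, request id, and amounts here are hypothetical):

```
https://example.org/nft/1337?requested=5000000000000000000&created_at=1708214400&finalized=4998000000000000000
```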
 
Let's look at the transfer operation for a withdrawal request. Inside WithdrawalQueueERC721, a _transfer function implements the transfer logic.
Besides some necessary checks, the transfer process fetches the withdrawal request for the requestId, confirms the request hasn't been claimed, and moves the request id from the from account to the to account.
```solidity
// lido/lido-dao/contracts/0.8.9/WithdrawalQueueERC721.sol

function transferFrom(address _from, address _to, uint256 _requestId) external override {
    _transfer(_from, _to, _requestId);
}

/// @dev Transfers `_requestId` from `_from` to `_to`.
function _transfer(address _from, address _to, uint256 _requestId) internal {
    if (_to == address(0)) revert TransferToZeroAddress();
    if (_to == _from) revert TransferToThemselves();
    if (_requestId == 0 || _requestId > getLastRequestId()) revert InvalidRequestId(_requestId);

    WithdrawalRequest storage request = _getQueue()[_requestId];
    if (request.claimed) revert RequestAlreadyClaimed(_requestId);

    if (_from != request.owner) revert TransferFromIncorrectOwner(_from, request.owner);
    // here and below we are sure that `_from` is the owner of the request
    address msgSender = msg.sender;
    if (
        !(_from == msgSender || isApprovedForAll(_from, msgSender) || _getTokenApprovals()[_requestId] == msgSender)
    ) {
        revert NotOwnerOrApproved(msgSender);
    }

    delete _getTokenApprovals()[_requestId];
    request.owner = _to;

    assert(_getRequestsByOwner()[_from].remove(_requestId));
    assert(_getRequestsByOwner()[_to].add(_requestId));

    _emitTransfer(_from, _to, _requestId);
}

function _getTokenApprovals() internal pure returns (mapping(uint256 => address) storage) {
    return TOKEN_APPROVALS_POSITION.storageMapUint256Address();
}
```

Oracle

I describe the oracle before the claim step because users can claim their Ether back only after the oracle has published the newest report and the withdrawal requests have been finalized.
There are three main parts of the oracle:
  • hash consensus oracle: reaches consensus on the hash of certain data
  • validators exit bus oracle: publishes validator exit information
  • accounting oracle: handles rebases, withdrawal request finalization, fee distribution, etc.

Hash consensus oracle

The execution layer and the consensus layer are largely isolated from each other, so off-chain oracles are needed to fetch data from the consensus layer and publish the necessary information to the execution layer. This information is quite important: it directly affects rebases, fee distribution, withdrawal request finalization, etc. Lido uses a committee of oracles that reach consensus on the data before publishing it.
HashConsensus.sol is used by the oracles to reach consensus on the hash of certain data. When consensus has been reached, HashConsensus.sol submits the report hash to the report processor contract, which handles the information updates related to the agreed-upon data.
Because there is a committee of oracles, they need a common reference point so that they can all reach consensus on the data for a specific point in time.
Lido uses frame and reference slot to handle this:
Time is divided in frames of equal length, each having reference slot and processing deadline. Report data must be gathered by looking at the world state (both Ethereum Consensus and Execution Layers) at the moment of the frame’s reference slot (including any state changes made in that slot), and must be processed before the frame’s processing deadline. Reference slot for each frame is set to the last slot of the epoch preceding the frame's first epoch. The processing deadline is set to the last slot of the last epoch of the frame.
[diagram: frames, reference slots, and processing deadlines]
 
In the above diagram, a frame consists of multiple epochs (the number of epochs is configured by Lido), and each epoch contains 32 slots.
We can call HashConsensus.getFrameConfig to get the frame settings. Using the ValidatorsExitBusOracle's HashConsensus as an example:
  • initial epoch: 201600
  • epochs per frame: 75
This means that the first frame starts from epoch 201600, and each frame has 75 epochs.
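A small Python helper shows how a frame's reference slot follows from this config (using the constants quoted above; illustrative, not Lido's code):

```python
# Reference slot = last slot of the epoch preceding the frame's first epoch.
SLOTS_PER_EPOCH = 32
INITIAL_EPOCH = 201_600
EPOCHS_PER_FRAME = 75

def ref_slot_of_frame(frame_index: int) -> int:
    frame_start_epoch = INITIAL_EPOCH + frame_index * EPOCHS_PER_FRAME
    return frame_start_epoch * SLOTS_PER_EPOCH - 1

print(ref_slot_of_frame(0))  # 6_451_199
print(ref_slot_of_frame(1))  # 6_453_599
```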
 
Oracles gather information at the reference slot of the current frame and call hashConsensus.submitReport to send their report hashes, trying to reach consensus.
```solidity
//lido/lido-dao/contracts/0.8.9/oracle/HashConsensus.sol

/// @notice Used by oracle members to submit hash of the data calculated for the given
/// reference slot.
///
/// @param slot The reference slot the data was calculated for. Reverts if doesn't match
///  the current reference slot.
///
/// @param report Hash of the data calculated for the given reference slot.
///
/// @param consensusVersion Version of the oracle consensus rules. Reverts if doesn't
///  match the version returned by the currently set consensus report processor,
///  or zero if no report processor is set.
///
function submitReport(uint256 slot, bytes32 report, uint256 consensusVersion) external {
    _submitReport(slot, report, consensusVersion);
}
```
 
The basic process in _submitReport:
  • check that the slot provided by the oracle is not zero and does not exceed the maximum of type uint64;
  • check that the report hash provided by the oracle is not zero;
  • check that the consensus version provided by the oracle matches the HashConsensus setting;
  • check that msg.sender is a valid member of the oracle committee;
  • check that the slot provided by the oracle matches the reference slot of the current frame (based on the current timestamp);
  • check that the time hasn't crossed the data publishing deadline of the current frame;
  • if the current time is within the fast lane duration, check that msg.sender is a valid fast lane member;
  • check whether the data for the reference slot has already been processed;
  • update the support count of this report hash in the _reportVariants array;
  • if the support count of some report hash reaches the quorum, HashConsensus calls reportProcessor.submitConsensusReport to record the report hash;
  • if a report hash had reached quorum but some oracle resubmits a different report hash, the previous report hash loses its consensus status, and HashConsensus calls reportProcessor.discardConsensusReport to cancel the consensus on it.
```solidity
//lido/lido-dao/contracts/0.8.9/oracle/HashConsensus.sol

function _submitReport(uint256 slot, bytes32 report, uint256 consensusVersion) internal {
    // check ref slot is not zero and doesn't exceed the maximum of type uint64
    if (slot == 0) revert InvalidSlot();
    if (slot > type(uint64).max) revert NumericOverflow();
    // check report hash is not the zero hash
    if (report == ZERO_HASH) revert EmptyReport();

    // msg.sender is a valid member in the oracle committee
    uint256 memberIndex = _getMemberIndex(_msgSender());
    MemberState memory memberState = _memberStates[memberIndex];

    // consensus version passed by the oracle matches the setting of the hashConsensus
    uint256 expectedConsensusVersion = _getConsensusVersion();
    if (consensusVersion != expectedConsensusVersion) {
        revert UnexpectedConsensusVersion(expectedConsensusVersion, consensusVersion);
    }

    // calculate the current slot and frame-related information
    uint256 timestamp = _getTime();
    uint256 currentSlot = _computeSlotAtTimestamp(timestamp);
    FrameConfig memory config = _frameConfig;
    ConsensusFrame memory frame = _getFrameAtTimestamp(timestamp, config);

    // check the slot specified by the oracle matches the refSlot of the current frame
    if (slot != frame.refSlot) revert InvalidSlot();
    if (currentSlot > frame.reportProcessingDeadlineSlot) revert StaleReport();

    // if the current time is in the fast lane period, then check that msg.sender
    // is a fast lane member
    if (currentSlot <= frame.refSlot + config.fastLaneLengthSlots
        && !_isFastLaneMember(memberIndex, frame.index)
    ) {
        revert NonFastLaneMemberCannotReportWithinFastLaneInterval();
    }

    // check whether the slot specified by the oracle has been submitted by this oracle before
    if (slot <= _getLastProcessingRefSlot()) {
        // consensus for the ref.slot was already reached and consensus report is processing
        if (slot == memberState.lastReportRefSlot) {
            // member sends a report for the same slot => let them know via a revert
            revert ConsensusReportAlreadyProcessing();
        } else {
            // member hasn't sent a report for this slot => normal operation, do nothing
            return;
        }
    }

    // update the support number of each report variant
    uint256 variantsLength;

    if (_reportingState.lastReportRefSlot != slot) {
        // first report for a new slot => clear report variants
        _reportingState.lastReportRefSlot = uint64(slot);
        variantsLength = 0;
    } else {
        variantsLength = _reportVariantsLength;
    }

    uint64 varIndex = 0;
    bool prevConsensusLost = false;

    while (varIndex < variantsLength && _reportVariants[varIndex].hash != report) {
        ++varIndex;
    }

    // if this oracle has submitted data for this slot before, check whether the data
    // is the same. If it is not, decrease the support of the previous variant
    if (slot == memberState.lastReportRefSlot) {
        uint64 prevVarIndex = memberState.lastReportVariantIndex;
        assert(prevVarIndex < variantsLength);
        if (varIndex == prevVarIndex) {
            revert DuplicateReport();
        } else {
            uint256 support = --_reportVariants[prevVarIndex].support;
            if (support == _quorum - 1) {
                prevConsensusLost = true;
            }
        }
    }

    // update support number
    uint256 support;

    if (varIndex < variantsLength) {
        support = ++_reportVariants[varIndex].support;
    } else {
        support = 1;
        _reportVariants[varIndex] = ReportVariant({hash: report, support: 1});
        _reportVariantsLength = ++variantsLength;
    }

    _memberStates[memberIndex] = MemberState({
        lastReportRefSlot: uint64(slot),
        lastReportVariantIndex: varIndex
    });

    emit ReportReceived(slot, _msgSender(), report);

    // if consensus has been reached, call _reportProcessor.submitConsensusReport;
    // else, if the consensus was lost, call _reportProcessor.discardConsensusReport
    if (support >= _quorum) {
        _consensusReached(frame, report, varIndex, support);
    } else if (prevConsensusLost) {
        _consensusNotReached(frame);
    }
}
```
```solidity
//lido/lido-dao/contracts/0.8.9/oracle/HashConsensus.sol

function _consensusReached(
    ConsensusFrame memory frame,
    bytes32 report,
    uint256 variantIndex,
    uint256 support
) internal {
    if (_reportingState.lastConsensusRefSlot != frame.refSlot ||
        _reportingState.lastConsensusVariantIndex != variantIndex
    ) {
        _reportingState.lastConsensusRefSlot = uint64(frame.refSlot);
        _reportingState.lastConsensusVariantIndex = uint64(variantIndex);
        emit ConsensusReached(frame.refSlot, report, support);
        _submitReportForProcessing(frame, report);
    }
}

function _consensusNotReached(ConsensusFrame memory frame) internal {
    if (_reportingState.lastConsensusRefSlot == frame.refSlot) {
        _reportingState.lastConsensusRefSlot = 0;
        emit ConsensusLost(frame.refSlot);
        _cancelReportProcessing(frame);
    }
}

function _submitReportForProcessing(ConsensusFrame memory frame, bytes32 report) internal {
    IReportAsyncProcessor(_reportProcessor).submitConsensusReport(
        report,
        frame.refSlot,
        _computeTimestampAtSlot(frame.reportProcessingDeadlineSlot)
    );
}

function _cancelReportProcessing(ConsensusFrame memory frame) internal {
    IReportAsyncProcessor(_reportProcessor).discardConsensusReport(frame.refSlot);
}
```
 
ValidatorsExitBusOracle and AccountingOracle both have their own HashConsensus contract and inherit from the contract BaseOracle, which defines the functions submitConsensusReport and discardConsensusReport used by HashConsensus to submit or discard a consensus report.
 
The basic process in submitConsensusReport:
  • check that msg.sender is the configured HashConsensus contract;
  • check that the refSlot provided by HashConsensus is greater than or equal to the previously submitted consensus report's slot;
  • check that the refSlot is greater than the last processing ref slot, i.e. the slot whose report data has already been processed; this means already-processed report data can't be reverted;
  • require that the current time hasn't crossed the submission deadline for the refSlot;
  • check that the reportHash is not zero;
  • store the consensus report;
  • call _handleConsensusReport, implemented by the child contract.
Note that Lido allows a submitted ref slot to go unprocessed; if that happens, the WarnProcessingMissed event is emitted.
```solidity
//lido/lido-dao/contracts/0.8.9/oracle/BaseOracle.sol

/// @notice Called by HashConsensus contract to push a consensus report for processing.
///
/// Note that submitting the report doesn't require the processor to start processing it right
/// away, this can happen later (see `getLastProcessingRefSlot`). Until processing is started,
/// HashConsensus is free to reach consensus on another report for the same reporting frame and
/// submit it using this same function, or to lose the consensus on the submitted report,
/// notifying the processor via `discardConsensusReport`.
///
function submitConsensusReport(bytes32 reportHash, uint256 refSlot, uint256 deadline) external {
    _checkSenderIsConsensusContract();

    uint256 prevSubmittedRefSlot = _storageConsensusReport().value.refSlot;
    if (refSlot < prevSubmittedRefSlot) {
        revert RefSlotCannotDecrease(refSlot, prevSubmittedRefSlot);
    }

    uint256 prevProcessingRefSlot = LAST_PROCESSING_REF_SLOT_POSITION.getStorageUint256();
    if (refSlot <= prevProcessingRefSlot) {
        revert RefSlotMustBeGreaterThanProcessingOne(refSlot, prevProcessingRefSlot);
    }

    if (_getTime() > deadline) {
        revert ProcessingDeadlineMissed(deadline);
    }

    if (refSlot != prevSubmittedRefSlot && prevProcessingRefSlot != prevSubmittedRefSlot) {
        emit WarnProcessingMissed(prevSubmittedRefSlot);
    }

    if (reportHash == bytes32(0)) {
        revert HashCannotBeZero();
    }

    emit ReportSubmitted(refSlot, reportHash, deadline);

    ConsensusReport memory report = ConsensusReport({
        hash: reportHash,
        refSlot: refSlot.toUint64(),
        processingDeadlineTime: deadline.toUint64()
    });

    _storageConsensusReport().value = report;
    _handleConsensusReport(report, prevSubmittedRefSlot, prevProcessingRefSlot);
}
```
 
The basic process in discardConsensusReport:
  • check that msg.sender is the configured HashConsensus contract;
  • check that a consensus report exists for the refSlot passed by HashConsensus;
  • check that the report for the refSlot hasn't been processed yet;
  • clear the hash of the stored consensus report;
  • call _handleConsensusReportDiscarded, implemented by the child contract.
```solidity
/// @notice Called by HashConsensus contract to notify that the report for the given ref. slot
/// is not a consensus report anymore and should be discarded. This can happen when a member
/// changes their report, is removed from the set, or when the quorum value gets increased.
///
/// Only called when, for the given reference slot:
///
///   1. there previously was a consensus report; AND
///   2. processing of the consensus report hasn't started yet; AND
///   3. report processing deadline is not expired yet; AND
///   4. there's no consensus report now (otherwise, `submitConsensusReport` is called instead).
///
/// Can be called even when there's no submitted non-discarded consensus report for the current
/// reference slot, i.e. can be called multiple times in succession.
///
function discardConsensusReport(uint256 refSlot) external {
    _checkSenderIsConsensusContract();

    ConsensusReport memory submittedReport = _storageConsensusReport().value;
    if (refSlot < submittedReport.refSlot) {
        revert RefSlotCannotDecrease(refSlot, submittedReport.refSlot);
    } else if (refSlot > submittedReport.refSlot) {
        return;
    }

    uint256 lastProcessingRefSlot = LAST_PROCESSING_REF_SLOT_POSITION.getStorageUint256();
    if (refSlot <= lastProcessingRefSlot) {
        revert RefSlotAlreadyProcessing();
    }

    _storageConsensusReport().value.hash = bytes32(0);
    _handleConsensusReportDiscarded(submittedReport);

    emit ReportDiscarded(submittedReport.refSlot, submittedReport.hash);
}
```

Validator exit bus oracle

When consensus has been reached on HashConsensus, ValidatorsExitBusOracle.submitReportData is called to pass in and process the detailed report data. (HashConsensus only reaches consensus on the hash of the report data for a given reference slot, so an oracle member needs to pass the detailed report data here for processing.)
 
Let's look at the content of the report data.
It has two parts:
  • Oracle consensus info: includes the consensus rule version and refSlot, which are used to check the validity of the report data.
  • Report data: includes requestsCount, dataFormat and data, which contain the details of the report.
Lido uses ABI-encoded bytes to store the data; currently only one data format is supported. Each entry contains a module id, node operator id, validator index and validator public key. Note that entries must be sorted in ascending order by (moduleId, nodeOpId, validatorIndex), which makes checking for duplicate requests easy.
```solidity
struct ReportData {
    ///
    /// Oracle consensus info
    ///

    /// @dev Version of the oracle consensus rules. Current version expected
    /// by the oracle can be obtained by calling getConsensusVersion().
    uint256 consensusVersion;

    /// @dev Reference slot for which the report was calculated. If the slot
    /// contains a block, the state being reported should include all state
    /// changes resulting from that block. The epoch containing the slot
    /// should be finalized prior to calculating the report.
    uint256 refSlot;

    ///
    /// Requests data
    ///

    /// @dev Total number of validator exit requests in this report. Must not be greater
    /// than limit checked in OracleReportSanityChecker.checkExitBusOracleReport.
    uint256 requestsCount;

    /// @dev Format of the validator exit requests data. Currently, only the
    /// DATA_FORMAT_LIST=1 is supported.
    uint256 dataFormat;

    /// @dev Validator exit requests data. Can differ based on the data format,
    /// see the constant defining a specific data format below for more info.
    bytes data;
}

/// @notice The list format of the validator exit requests data. Used when all
/// requests fit into a single transaction.
///
/// Each validator exit request is described by the following 64-byte array:
///
/// MSB <------------------------------------------------------- LSB
/// |  3 bytes  |  5 bytes  |    8 bytes     |    48 bytes     |
/// | moduleId  | nodeOpId  | validatorIndex | validatorPubkey |
///
/// All requests are tightly packed into a byte array where requests follow
/// one another without any separator or padding, and passed to the `data`
/// field of the report structure.
///
/// Requests must be sorted in the ascending order by the following compound
/// key: (moduleId, nodeOpId, validatorIndex).
///
uint256 public constant DATA_FORMAT_LIST = 1;
```
 
Before handling the report data, submitReportData performs some checks first, including:
  • the caller is a member of the oracle committee or possesses the SUBMIT_DATA_ROLE;
  • the provided contract version is the same as the current one;
  • the provided consensus version is the same as the expected one;
  • the provided reference slot is the same as the current consensus frame's;
  • the keccak256 hash of the ABI-encoded data is the same as the consensus data's hash;
  • the consensus report hash is not zero;
  • the processing deadline for the current consensus frame has not been missed;
  • the consensus report hasn't been processed yet.
After all the checks pass, it calls _handleConsensusReportData to handle the consensus report data.
```solidity
/// lido/lido-dao/contracts/0.8.9/oracle/ValidatorsExitBusOracle.sol

/// @notice Submits report data for processing.
///
/// @param data The data. See the `ReportData` structure's docs for details.
/// @param contractVersion Expected version of the oracle contract.
///
/// Reverts if:
/// - The caller is not a member of the oracle committee and doesn't possess the
///   SUBMIT_DATA_ROLE.
/// - The provided contract version is different from the current one.
/// - The provided consensus version is different from the expected one.
/// - The provided reference slot differs from the current consensus frame's one.
/// - The processing deadline for the current consensus frame is missed.
/// - The keccak256 hash of the ABI-encoded data is different from the last hash
///   provided by the hash consensus contract.
/// - The provided data doesn't meet safety checks.
///
function submitReportData(ReportData calldata data, uint256 contractVersion)
    external
    whenResumed
{
    _checkMsgSenderIsAllowedToSubmitData();
    _checkContractVersion(contractVersion);
    // it's a waste of gas to copy the whole calldata into mem but seems there's no way around
    _checkConsensusData(data.refSlot, data.consensusVersion, keccak256(abi.encode(data)));
    _startProcessing();
    _handleConsensusReportData(data);
}
```
 
The basic process of handling the report data:
  • check that report.dataFormat matches the configured DATA_FORMAT_LIST;
  • check that the length of report.data is a multiple of PACKED_REQUEST_LENGTH, because each exit request is exactly 64 bytes in the current encoding format;
  • call oracleReportSanityChecker.checkExitBusOracleReport to check that the exit request count doesn't exceed the limit;
  • process the report data, which essentially emits an event per exit request containing: module id, node operator id, validator index, validator public key and timestamp;
  • update the processing state and the total exit request count.
```solidity
/// lido/lido-dao/contracts/0.8.9/oracle/ValidatorsExitBusOracle.sol

function _handleConsensusReportData(ReportData calldata data) internal {
    // check the match of report.dataFormat and the configured DATA_FORMAT_LIST
    if (data.dataFormat != DATA_FORMAT_LIST) {
        revert UnsupportedRequestsDataFormat(data.dataFormat);
    }

    // check the length of report.data is a multiple of PACKED_REQUEST_LENGTH,
    // because each exit request is exactly 64 bytes in the current encoding format
    if (data.data.length % PACKED_REQUEST_LENGTH != 0) {
        revert InvalidRequestsDataLength();
    }

    // check the exit request count doesn't cross the limit
    IOracleReportSanityChecker(LOCATOR.oracleReportSanityChecker())
        .checkExitBusOracleReport(data.requestsCount);

    // check the match between data.length and report.requestsCount
    if (data.data.length / PACKED_REQUEST_LENGTH != data.requestsCount) {
        revert UnexpectedRequestsDataLength();
    }

    // process exit requests (emit validator to-exit events)
    _processExitRequestsList(data.data);

    // update the processing state
    _storageDataProcessingState().value = DataProcessingState({
        refSlot: data.refSlot.toUint64(),
        requestsCount: data.requestsCount.toUint64(),
        requestsProcessed: data.requestsCount.toUint64(),
        dataFormat: uint16(DATA_FORMAT_LIST)
    });

    if (data.requestsCount == 0) {
        return;
    }

    // update the total exit request amount
    TOTAL_REQUESTS_PROCESSED_POSITION.setStorageUint256(
        TOTAL_REQUESTS_PROCESSED_POSITION.getStorageUint256() + data.requestsCount
    );
}
```
 
In _processExitRequestsList, the contract iterates through the byte array and decodes each exit request, extracting moduleId (module id), nodeOpId (node operator id) and valIndex (validator index).
It then checks that the (moduleId, nodeOpId, valIndex) tuples appear in ascending order, which guarantees there are no duplicate exit requests.
It also checks that each to-exit validator's index is greater than the previously requested one for the same node operator, enforcing a FIFO (first in, first out) rule.
Finally, it emits an event for each exit request to notify the node operator to exit the corresponding validator.
Note: in Ethereum 2.0, when validators are activated, they are assigned indexes sequentially. This process ensures that each validator has a unique identifier. For instance, if the current highest validator index is N, the next validator to be activated will receive an index of N+1.
```solidity
/// lido/lido-dao/contracts/0.8.9/oracle/ValidatorsExitBusOracle.sol

function _processExitRequestsList(bytes calldata data) internal {
    // use offset and offsetPastEnd to track the iteration status of the data
    uint256 offset;
    uint256 offsetPastEnd;

    // initialize offset and offsetPastEnd.
    // calldata "bytes" is viewed as a dynamic array,
    // so .offset returns the payload's start position, not the position of the length
    assembly {
        offset := data.offset
        offsetPastEnd := add(offset, data.length)
    }

    uint256 lastDataWithoutPubkey = 0;
    uint256 lastNodeOpKey = 0;
    RequestedValidator memory lastRequestedVal;
    bytes calldata pubkey;

    assembly {
        pubkey.length := 48
    }

    uint256 timestamp = _getTime();

    while (offset < offsetPastEnd) {
        uint256 dataWithoutPubkey;
        assembly {
            // 16 most significant bytes are taken by module id, node op id, and val index
            dataWithoutPubkey := shr(128, calldataload(offset))
            // the next 48 bytes are taken by the pubkey
            pubkey.offset := add(offset, 16)
            // totalling to 64 bytes
            offset := add(offset, 64)
        }

        //                              dataWithoutPubkey
        // MSB <---------------------------------------------------------------------- LSB
        // | 128 bits: zeros | 24 bits: moduleId | 40 bits: nodeOpId | 64 bits: valIndex |
        //
        // ensure there are no duplicate exit requests
        if (dataWithoutPubkey <= lastDataWithoutPubkey) {
            revert InvalidRequestsDataSortOrder();
        }

        uint64 valIndex = uint64(dataWithoutPubkey);
        uint256 nodeOpId = uint40(dataWithoutPubkey >> 64);
        uint256 moduleId = uint24(dataWithoutPubkey >> (64 + 40));

        if (moduleId == 0) {
            revert InvalidRequestsData();
        }

        uint256 nodeOpKey = _computeNodeOpKey(moduleId, nodeOpId);
        if (nodeOpKey != lastNodeOpKey) {
            if (lastNodeOpKey != 0) {
                _storageLastRequestedValidatorIndices()[lastNodeOpKey] = lastRequestedVal;
            }
            // get the last requested value (requested flag and validator index) of the node operator;
            // this is used to check that the exit order of a node operator's validators follows the FIFO rule
            lastRequestedVal = _storageLastRequestedValidatorIndices()[nodeOpKey];
            lastNodeOpKey = nodeOpKey;
        }

        // check the FIFO rule
        if (lastRequestedVal.requested && valIndex <= lastRequestedVal.index) {
            revert NodeOpValidatorIndexMustIncrease(
                moduleId,
                nodeOpId,
                lastRequestedVal.index,
                valIndex
            );
        }

        lastRequestedVal = RequestedValidator(true, valIndex);
        lastDataWithoutPubkey = dataWithoutPubkey;

        // emit an event to notify the node operator to exit the corresponding validator
        emit ValidatorExitRequest(moduleId, nodeOpId, valIndex, pubkey, timestamp);
    }

    if (lastNodeOpKey != 0) {
        // update lastRequestedVal of the node operator
        _storageLastRequestedValidatorIndices()[lastNodeOpKey] = lastRequestedVal;
    }
}
```
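A Python sketch of the same decoding, following the 64-byte layout documented in DATA_FORMAT_LIST (illustrative, not Lido's code):

```python
# Decode tightly packed exit requests: 3-byte moduleId, 5-byte nodeOpId,
# 8-byte validatorIndex, 48-byte pubkey, big-endian, no padding.
def decode_exit_requests(data: bytes):
    assert len(data) % 64 == 0, "InvalidRequestsDataLength"
    requests = []
    for off in range(0, len(data), 64):
        module_id = int.from_bytes(data[off:off + 3], "big")
        node_op_id = int.from_bytes(data[off + 3:off + 8], "big")
        val_index = int.from_bytes(data[off + 8:off + 16], "big")
        pubkey = data[off + 16:off + 64]
        requests.append((module_id, node_op_id, val_index, pubkey))
    # entries must be strictly ascending by (moduleId, nodeOpId, valIndex)
    keys = [r[:3] for r in requests]
    assert keys == sorted(set(keys)), "InvalidRequestsDataSortOrder"
    return requests

sample = (1).to_bytes(3, "big") + (7).to_bytes(5, "big") + (42).to_bytes(8, "big") + b"\x00" * 48
print(decode_exit_requests(sample))  # [(1, 7, 42, b'\x00...')]
```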

Accounting oracle

When consensus has been reached on HashConsensus, AccountingOracle.submitReportData is called to pass in and process the detailed report data.
Let's look at the content of the report data. There are four parts:
  • Oracle consensus info: includes the consensus rule version and refSlot, which are used to check the validity of the report data.
  • Consensus layer info: includes the cumulative validator count and Ether balance on the consensus layer, plus the validator exit status of each staking module.
  • Execution layer info: includes the balances of the withdrawal vault and the execution layer rewards vault. The withdrawal vault receives staking rewards and Ether from exited validators; the execution layer rewards vault receives transaction fees from proposed blocks. There is also sharesRequestedToBurn, the amount of shares requested to burn through the Burner as observed at the reference slot.
  • Decision: withdrawalFinalizationBatches contains the withdrawal request ids to finalize. simulatedShareRate is the reference share rate used to finalize withdrawal requests. isBunkerMode is the decision whether to enter bunker mode. extraDataFormat, extraDataHash and extraDataItemsCount specify the extra report data, such as exited/stuck validator counts.
```solidity
///lido/lido-dao/contracts/0.8.9/oracle/AccountingOracle.sol

struct ReportData {
    ///
    /// Oracle consensus info
    ///

    /// @dev Version of the oracle consensus rules. Current version expected
    /// by the oracle can be obtained by calling getConsensusVersion().
    uint256 consensusVersion;

    /// @dev Reference slot for which the report was calculated. If the slot
    /// contains a block, the state being reported should include all state
    /// changes resulting from that block. The epoch containing the slot
    /// should be finalized prior to calculating the report.
    uint256 refSlot;

    ///
    /// CL values
    ///

    /// @dev The number of validators on consensus layer that were ever deposited
    /// via Lido as observed at the reference slot.
    uint256 numValidators;

    /// @dev Cumulative balance of all Lido validators on the consensus layer
    /// as observed at the reference slot.
    uint256 clBalanceGwei;

    /// @dev Ids of staking modules that have more exited validators than the number
    /// stored in the respective staking module contract as observed at the reference
    /// slot.
    uint256[] stakingModuleIdsWithNewlyExitedValidators;

    /// @dev Number of ever exited validators for each of the staking modules from
    /// the stakingModuleIdsWithNewlyExitedValidators array as observed at the
    /// reference slot.
    uint256[] numExitedValidatorsByStakingModule;

    ///
    /// EL values
    ///

    /// @dev The ETH balance of the Lido withdrawal vault as observed at the reference slot.
    uint256 withdrawalVaultBalance;

    /// @dev The ETH balance of the Lido execution layer rewards vault as observed
    /// at the reference slot.
    uint256 elRewardsVaultBalance;

    /// @dev The shares amount requested to burn through Burner as observed
    /// at the reference slot. The value can be obtained in the following way:
    /// `(coverSharesToBurn, nonCoverSharesToBurn) = IBurner(burner).getSharesRequestedToBurn()
    /// sharesRequestedToBurn = coverSharesToBurn + nonCoverSharesToBurn`
    uint256 sharesRequestedToBurn;

    ///
    /// Decision
    ///

    /// @dev The ascendingly-sorted array of withdrawal request IDs obtained by calling
    /// WithdrawalQueue.calculateFinalizationBatches. Empty array means that no withdrawal
    /// requests should be finalized.
    uint256[] withdrawalFinalizationBatches;

    /// @dev The share/ETH rate with the 10^27 precision (i.e. the price of one stETH share
    /// in ETH where one ETH is denominated as 10^27) that would be effective as the result of
    /// applying this oracle report at the reference slot, with withdrawalFinalizationBatches
    /// set to empty array and simulatedShareRate set to 0.
    uint256 simulatedShareRate;

    /// @dev Whether, based on the state observed at the reference slot, the protocol should
    /// be in the bunker mode.
    bool isBunkerMode;

    ///
    /// Extra data — the oracle information that allows asynchronous processing, potentially in
    /// chunks, after the main data is processed. The oracle doesn't enforce that extra data
    /// attached to some data report is processed in full before the processing deadline expires
    /// or a new data report starts being processed, but enforces that no processing of extra
    /// data for a report is possible after its processing deadline passes or a new data report
    /// arrives.
    ///
    /// Extra data is an array of items, each item being encoded as follows:
    ///
    ///    3 bytes    2 bytes     X bytes
    /// | itemIndex | itemType | itemPayload |
    ///
    /// itemIndex is a 0-based index into the extra data array;
    /// itemType is the type of extra data item;
    /// itemPayload is the item's data which interpretation depends on the item's type.
    ///
    /// Items should be sorted ascendingly by the (itemType, ...itemSortingKey) compound key
    /// where `itemSortingKey` calculation depends on the item's type (see below).
    ///
    /// ----------------------------------------------------------------------------------------
    ///
    /// itemType=0 (EXTRA_DATA_TYPE_STUCK_VALIDATORS): stuck validators by node operators.
    /// itemPayload format:
    ///
    /// | 3 bytes  |   8 bytes    | nodeOpsCount * 8 bytes | nodeOpsCount * 16 bytes |
    /// | moduleId | nodeOpsCount |     nodeOperatorIds    |  stuckValidatorsCounts  |
    ///
    /// moduleId is the staking module for which exited keys counts are being reported.
    ///
    /// nodeOperatorIds contains an array of ids of node operators that have total stuck
    /// validators counts changed compared to the staking module smart contract storage as
    /// observed at the reference slot. Each id is a 8-byte uint, ids are packed tightly.
    ///
    /// nodeOpsCount contains the number of node operator ids contained in the nodeOperatorIds
    /// array. Thus, nodeOpsCount = byteLength(nodeOperatorIds) / 8.
    ///
    /// stuckValidatorsCounts contains an array of stuck validators total counts, as observed at
    /// the reference slot, for the node operators from the nodeOperatorIds array, in the same
    /// order. Each count is a 16-byte uint, counts are packed tightly. Thus,
    /// byteLength(stuckValidatorsCounts) = nodeOpsCount * 16.
    ///
    /// nodeOpsCount must not be greater than maxAccountingExtraDataListItemsCount specified
    /// in OracleReportSanityChecker contract. If a staking module has more node operators
    /// with total stuck validators counts changed compared to the staking module smart contract
    /// storage (as observed at the reference slot), reporting for that module should be split
    /// into multiple items.
    ///
    /// Item sorting key is a compound key consisting of the module id and the first reported
    /// node operator's id:
    ///
    /// itemSortingKey = (moduleId, nodeOperatorIds[0:8])
    ///
    /// ----------------------------------------------------------------------------------------
    ///
    /// itemType=1 (EXTRA_DATA_TYPE_EXITED_VALIDATORS): exited validators by node operators.
    ///
    /// The payload format is exactly the same as for itemType=EXTRA_DATA_TYPE_STUCK_VALIDATORS,
    /// except that, instead of stuck validators counts, exited validators counts are reported.
    /// The `itemSortingKey` is calculated identically.
    ///
    /// ----------------------------------------------------------------------------------------
    ///
    /// The oracle daemon should report exited/stuck validators counts ONLY for those
    /// (moduleId, nodeOperatorId) pairs that contain outdated counts in the staking
    /// module smart contract as observed at the reference slot.
    ///
    /// Extra data array can be passed in different formats, see below.
    ///

    /// @dev Format of the extra data.
    ///
    /// Currently, only the EXTRA_DATA_FORMAT_EMPTY=0 and EXTRA_DATA_FORMAT_LIST=1
    /// formats are supported. See the constant defining a specific data format for
    /// more info.
    ///
    uint256 extraDataFormat;

    /// @dev Hash of the extra data. See the constant defining a specific extra data
    /// format for the info on how to calculate the hash.
    ///
    /// Must be set to a zero hash if the oracle report contains no extra data.
    ///
    bytes32 extraDataHash;

    /// @dev Number of the extra data items.
    ///
    /// Must be set to zero if the oracle report contains no extra data.
    ///
    uint256 extraDataItemsCount;
}
```
 
Simulated share rate calculation
Note that Lido uses simulatedShareRate to finalize withdrawal requests. This share rate is calculated at the report's reference slot, assuming no withdrawal requests are finalized, while still taking the rebase limit into account.
Using this calculated share rate, the oracle can decide which withdrawal requests to finalize based on the amount of usable ether.
simulatedShareRate is only a reference; the actual share rate applied to a withdrawal request falls into one of two scenarios (see the sketch after this list):
  • discounted: if the registered share rate of the withdrawal request (calculated when the request was registered) is greater than simulatedShareRate, simulatedShareRate is used instead.
  • normal: if the registered share rate of the withdrawal request is smaller than or equal to simulatedShareRate, the registered share rate is used.
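To make the two scenarios concrete, here is a minimal Python sketch of the rate selection (not protocol code; the function name and values are made up for illustration):

```python
E27 = 10**27  # Lido's share rate precision (1e27)

def effective_share_rate(registered_rate: int, simulated_share_rate: int) -> int:
    """Rate a withdrawal request is finalized at:
    discounted -> a registered rate above the simulated rate gets capped,
    normal     -> a registered rate at or below the simulated rate is kept."""
    return min(registered_rate, simulated_share_rate)

# A request registered at a 1.05 share rate while the report's simulated
# rate is 1.02 gets discounted down to 1.02.
registered = 105 * E27 // 100
simulated = 102 * E27 // 100
assert effective_share_rate(registered, simulated) == simulated
```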
 
To calculate simulatedShareRate, the oracle uses eth_call to invoke Lido.handleOracleReport, obtaining the post-report totals together with the amounts of ether that can be withdrawn from the Withdrawal and Execution Layer Rewards Vaults, taking the limits into account. Reference
Note that in this simulation elRewardsVaultBalance should be set to 0, withdrawalFinalizationBatches to the empty array [], and simulatedShareRate to 0.
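As a sketch of that off-chain procedure with web3.py (contract address, ABI and the report-field variables are placeholders assumed to be prepared elsewhere; the share rate formula comes from handleOracleReport's NatSpec quoted later):

```python
from web3 import Web3

# Placeholders: a real oracle daemon loads these from its configuration.
RPC_URL = "http://localhost:8545"
LIDO_ADDRESS = "0x0000000000000000000000000000000000000000"  # Lido proxy
ACCOUNTING_ORACLE = "0x0000000000000000000000000000000000000000"
LIDO_ABI: list = []  # ABI fragment containing handleOracleReport

w3 = Web3(Web3.HTTPProvider(RPC_URL))
lido = w3.eth.contract(address=LIDO_ADDRESS, abi=LIDO_ABI)

# eth_call Lido.handleOracleReport at the reference slot's block with an
# empty withdrawalFinalizationBatches array and simulatedShareRate == 0.
post = lido.functions.handleOracleReport(
    report_timestamp, time_elapsed,   # oracle timings
    cl_validators, cl_balance,        # CL values
    withdrawal_vault_balance,
    0,                                # elRewardsVaultBalance set to 0
    shares_requested_to_burn,
    [],                               # no withdrawal batches
    0,                                # simulatedShareRate == 0
).call({"from": ACCOUNTING_ORACLE}, block_identifier=ref_block)

post_total_pooled_ether, post_total_shares = post[0], post[1]
simulated_share_rate = post_total_pooled_ether * 10**27 // post_total_shares
```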
 
Before handling the report data, submitReportData performs some checks first, including:
  • check the caller is a member of the oracle committee or possesses the SUBMIT_DATA_ROLE.
  • check the provided contract version is the same as the current one.
  • check the provided consensus version is the same as the expected one.
  • check the provided reference slot is the same as the current consensus frame's one.
  • check the keccak256 hash of the ABI-encoded data is the same as the consensus data’s hash.
  • check the consensus’s report hash is not zero.
  • check the processing deadline for the current consensus frame is not missed.
  • check the consensus’s report hasn’t been processed yet.
///lido/lido-dao/contracts/0.8.9/oracle/AccountingOracle.sol /// @notice Submits report data for processing. /// /// @param data The data. See the `ReportData` structure's docs for details. /// @param contractVersion Expected version of the oracle contract. /// /// Reverts if: /// - The caller is not a member of the oracle committee and doesn't possess the /// SUBMIT_DATA_ROLE. /// - The provided contract version is different from the current one. /// - The provided consensus version is different from the expected one. /// - The provided reference slot differs from the current consensus frame's one. /// - The processing deadline for the current consensus frame is missed. /// - The keccak256 hash of the ABI-encoded data is different from the last hash /// provided by the hash consensus contract. /// - The provided data doesn't meet safety checks. /// function submitReportData(ReportData calldata data, uint256 contractVersion) external { _checkMsgSenderIsAllowedToSubmitData(); _checkContractVersion(contractVersion); _checkConsensusData(data.refSlot, data.consensusVersion, keccak256(abi.encode(data))); uint256 prevRefSlot = _startProcessing(); _handleConsensusReportData(data, prevRefSlot); }
 
After these checks, it calls _handleConsensusReportData to handle the report data:
  • check the validity and consistency of data.extraDataFormat, data.extraDataItemsCount and data.extraDataHash
  • check data.extraDataItemsCount doesn't cross the limit
  • call LEGACY_ORACLE to update the last completed epoch id and emit an event
  • update validator exit information in the StakingRouter
  • update the status of bunker mode (active or not) in the WithdrawalQueueERC721
  • call Lido.handleOracleReport to update accounting, collect rewards and finalize withdrawals
  • store the ExtraDataProcessingState for the later extra-data report
/// lido/lido-dao/contracts/0.8.9/oracle/AccountingOracle.sol function _handleConsensusReportData(ReportData calldata data, uint256 prevRefSlot) internal { // check the validity and consistency of data.extraDataFormat,data.extraDataItemsCount and data.extraDataHash if (data.extraDataFormat == EXTRA_DATA_FORMAT_EMPTY) { if (data.extraDataHash != bytes32(0)) { revert UnexpectedExtraDataHash(bytes32(0), data.extraDataHash); } if (data.extraDataItemsCount != 0) { revert UnexpectedExtraDataItemsCount(0, data.extraDataItemsCount); } } else { if (data.extraDataFormat != EXTRA_DATA_FORMAT_LIST) { revert UnsupportedExtraDataFormat(data.extraDataFormat); } if (data.extraDataItemsCount == 0) { revert ExtraDataItemsCountCannotBeZeroForNonEmptyData(); } if (data.extraDataHash == bytes32(0)) { revert ExtraDataHashCannotBeZeroForNonEmptyData(); } } // check data.extraDataItemsCount doesn't cross limit IOracleReportSanityChecker(LOCATOR.oracleReportSanityChecker()) .checkAccountingExtraDataListItemsCount(data.extraDataItemsCount); // calls LEGACY_ORACLE to update last completed epoch id and emit event ILegacyOracle(LEGACY_ORACLE).handleConsensusLayerReport( data.refSlot, data.clBalanceGwei * 1e9, data.numValidators ); uint256 slotsElapsed = data.refSlot - prevRefSlot; //get contract address from locator IStakingRouter stakingRouter = IStakingRouter(LOCATOR.stakingRouter()); IWithdrawalQueue withdrawalQueue = IWithdrawalQueue(LOCATOR.withdrawalQueue()); // update validator exit information in StakingRouter _processStakingRouterExitedValidatorsByModule( stakingRouter, data.stakingModuleIdsWithNewlyExitedValidators, data.numExitedValidatorsByStakingModule, slotsElapsed ); // update the status of bunker mode(active or not) in the WithdrawalQueueERC721 withdrawalQueue.onOracleReport( data.isBunkerMode, GENESIS_TIME + prevRefSlot * SECONDS_PER_SLOT, GENESIS_TIME + data.refSlot * SECONDS_PER_SLOT ); // Updates accounting stats, collects EL rewards and distributes collected rewards if beacon balance increased, performs withdrawal requests finalization ILido(LIDO).handleOracleReport( GENESIS_TIME + data.refSlot * SECONDS_PER_SLOT, slotsElapsed * SECONDS_PER_SLOT, data.numValidators, data.clBalanceGwei * 1e9, data.withdrawalVaultBalance, data.elRewardsVaultBalance, data.sharesRequestedToBurn, data.withdrawalFinalizationBatches, data.simulatedShareRate ); // update ExtraDataProcessingState, store the extraData's related information for later extraData report usage _storageExtraDataProcessingState().value = ExtraDataProcessingState({ refSlot: data.refSlot.toUint64(), dataFormat: data.extraDataFormat.toUint16(), submitted: false, dataHash: data.extraDataHash, itemsCount: data.extraDataItemsCount.toUint16(), itemsProcessed: 0, lastSortingKey: 0 }); }
LEGACY_ORACLE.handleConsensusLayerReport
update the last completed epoch id for which an oracle report was accepted
///lido/lido-dao/contracts/0.4.24/oracle/LegacyOracle.sol function _getChainSpec() internal view returns (ChainSpec memory chainSpec) { uint256 data = BEACON_SPEC_POSITION.getStorageUint256(); chainSpec.epochsPerFrame = uint64(data >> 192); chainSpec.slotsPerEpoch = uint64(data >> 128); chainSpec.secondsPerSlot = uint64(data >> 64); chainSpec.genesisTime = uint64(data); return chainSpec; } /** * @notice Called by the new accounting oracle on each report. */ function handleConsensusLayerReport(uint256 _refSlot, uint256 _clBalance, uint256 _clValidators) external { require(msg.sender == getAccountingOracle(), "SENDER_NOT_ALLOWED"); // new accounting oracle's ref. slot is the last slot of the epoch preceding the one the frame starts at uint256 epochId = (_refSlot + 1) / _getChainSpec().slotsPerEpoch; LAST_COMPLETED_EPOCH_ID_POSITION.setStorageUint256(epochId); emit Completed(epochId, uint128(_clBalance), uint128(_clValidators)); }
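As a quick sanity check of the epoch arithmetic (assuming mainnet's 32 slots per epoch; not oracle code):

```python
SLOTS_PER_EPOCH = 32  # mainnet chain spec

# The ref slot is the last slot of the epoch preceding the one the frame
# starts at, so adding 1 lands on the first slot of the frame's epoch.
ref_slot = 8191                               # last slot of epoch 255
epoch_id = (ref_slot + 1) // SLOTS_PER_EPOCH
assert epoch_id == 256
```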
_processStakingRouterExitedValidatorsByModule
handle validator exit related information:
  • check the lengths of stakingModuleIds and numExitedValidatorsByStakingModule are the same
  • return early if stakingModuleIds is empty
  • check the stakingModuleIds array is strictly ascending to avoid duplicates
  • check none of the exited validator counts is zero
  • call stakingRouter to update the exited validator information
  • calculate the rate of exited validators per day
  • check the validator exit rate per day doesn't cross the limit (a quick recomputation follows this list)
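The per-day rate check boils down to proportional scaling. A quick recomputation in Python (toy numbers, assuming mainnet's 12-second slots), before the actual Solidity below:

```python
SECONDS_PER_SLOT = 12   # mainnet
DAY = 24 * 60 * 60

def exited_validators_rate_per_day(newly_exited: int, slots_elapsed: int) -> int:
    # Mirrors newlyExitedValidatorsCount * 1 days / (SECONDS_PER_SLOT * slotsElapsed):
    # scale the exits observed over the elapsed slots to a 24-hour window.
    return newly_exited * DAY // (SECONDS_PER_SLOT * slots_elapsed)

# 40 newly exited validators over a one-day frame (7200 slots) -> 40 per day.
assert exited_validators_rate_per_day(40, 7200) == 40
```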
///lido/lido-dao/contracts/0.8.9/oracle/AccountingOracle.sol function _processStakingRouterExitedValidatorsByModule( IStakingRouter stakingRouter, uint256[] calldata stakingModuleIds, uint256[] calldata numExitedValidatorsByStakingModule, uint256 slotsElapsed ) internal { //check length of stakingModuleIds and numExitedValidatorsByStakingModule is same if (stakingModuleIds.length != numExitedValidatorsByStakingModule.length) { revert InvalidExitedValidatorsData(); } //check length of stakingModuleIds is not zero if (stakingModuleIds.length == 0) { return; } //check the stakingModuleIds array are ascending to avoid duplicate. for (uint256 i = 1; i < stakingModuleIds.length;) { if (stakingModuleIds[i] <= stakingModuleIds[i - 1]) { revert InvalidExitedValidatorsData(); } unchecked { ++i; } } //check none of the exited validator amount is zero for (uint256 i = 0; i < stakingModuleIds.length;) { if (numExitedValidatorsByStakingModule[i] == 0) { revert InvalidExitedValidatorsData(); } unchecked { ++i; } } //calls stakingRouter to update exited validator information uint256 newlyExitedValidatorsCount = stakingRouter.updateExitedValidatorsCountByStakingModule( stakingModuleIds, numExitedValidatorsByStakingModule ); //calculate the rate of exited validators per data uint256 exitedValidatorsRatePerDay = newlyExitedValidatorsCount * (1 days) / (SECONDS_PER_SLOT * slotsElapsed); //check the validator exit rate per day doesn't cross the limit IOracleReportSanityChecker(LOCATOR.oracleReportSanityChecker()) .checkExitedValidatorsRatePerDay(exitedValidatorsRatePerDay); }
 
stakingRouter.updateExitedValidatorsCountByStakingModule updates the exited validators count inside the stakingRouter:
  • check the lengths of _stakingModuleIds and _exitedValidatorsCounts are equal
  • initialize a variable newlyExitedValidatorsCount to accumulate the newly exited validator count
  • get the data of the stakingModule indexed by id
  • get the previous exited validator count reported by the AccountingOracle
  • check the exited count does not decrease
  • get totalExitedValidators and totalDepositedValidators from the StakingModule
  • check that the total exited validator count can't exceed the total deposited validator count
  • accumulate the newly exited validator count
  • if totalExitedValidators is smaller than prevReportedExitedValidatorsCount, the data in the StakingModule lags behind: not all exited validators have been asynchronously reported down to the module, so emit an event to record and notify (see the toy recomputation after the snippet below)
  • update exitedValidatorsCount for the stakingModule in the stakingRouter's storage
struct StakingModule { /// @notice unique id of the staking module uint24 id; /// @notice address of staking module address stakingModuleAddress; /// @notice part of the fee taken from staking rewards that goes to the staking module uint16 stakingModuleFee; /// @notice part of the fee taken from staking rewards that goes to the treasury uint16 treasuryFee; /// @notice target percent of total validators in protocol, in BP uint16 targetShare; /// @notice staking module status if staking module can not accept the deposits or can participate in further reward distribution uint8 status; /// @notice name of staking module string name; /// @notice block.timestamp of the last deposit of the staking module /// @dev NB: lastDepositAt gets updated even if the deposit value was 0 and no actual deposit happened uint64 lastDepositAt; /// @notice block.number of the last deposit of the staking module /// @dev NB: lastDepositBlock gets updated even if the deposit value was 0 and no actual deposit happened uint256 lastDepositBlock; /// @notice number of exited validators uint256 exitedValidatorsCount; } /// @notice Updates total numbers of exited validators for staking modules with the specified /// module ids. /// /// @param _stakingModuleIds Ids of the staking modules to be updated. /// @param _exitedValidatorsCounts New counts of exited validators for the specified staking modules. /// /// @return The total increase in the aggregate number of exited validators across all updated modules. /// /// The total numbers are stored in the staking router and can differ from the totals obtained by calling /// `IStakingModule.getStakingModuleSummary()`. The overall process of updating validator counts is the following: /// /// 1. In the first data submission phase, the oracle calls `updateExitedValidatorsCountByStakingModule` on the /// staking router, passing the totals by module. The staking router stores these totals and uses them to /// distribute new stake and staking fees between the modules. There can only be single call of this function /// per oracle reporting frame. /// /// 2. In the first part of the second data submission phase, the oracle calls /// `StakingRouter.reportStakingModuleStuckValidatorsCountByNodeOperator` on the staking router which passes the /// counts by node operator to the staking module by calling `IStakingModule.updateStuckValidatorsCount`. /// This can be done multiple times for the same module, passing data for different subsets of node operators. /// /// 3. In the second part of the second data submission phase, the oracle calls /// `StakingRouter.reportStakingModuleExitedValidatorsCountByNodeOperator` on the staking router which passes /// the counts by node operator to the staking module by calling `IStakingModule.updateExitedValidatorsCount`. /// This can be done multiple times for the same module, passing data for different subsets of node /// operators. /// /// 4. At the end of the second data submission phase, it's expected for the aggregate exited validators count /// across all module's node operators (stored in the module) to match the total count for this module /// (stored in the staking router). However, it might happen that the second phase of data submission doesn't /// finish until the new oracle reporting frame is started, in which case staking router will emit a warning /// event `StakingModuleExitedValidatorsIncompleteReporting` when the first data submission phase is performed /// for a new reporting frame. 
This condition will result in the staking module having an incomplete data about /// the exited and maybe stuck validator counts during the whole reporting frame. Handling this condition is /// the responsibility of each staking module. /// /// 5. When the second reporting phase is finished, i.e. when the oracle submitted the complete data on the stuck /// and exited validator counts per node operator for the current reporting frame, the oracle calls /// `StakingRouter.onValidatorsCountsByNodeOperatorReportingFinished` which, in turn, calls /// `IStakingModule.onExitedAndStuckValidatorsCountsUpdated` on all modules. /// function updateExitedValidatorsCountByStakingModule( uint256[] calldata _stakingModuleIds, uint256[] calldata _exitedValidatorsCounts ) external onlyRole(REPORT_EXITED_VALIDATORS_ROLE) returns (uint256) { //check the length of _stakingModuleIds and _exitedValidatorsCounts are equal if (_stakingModuleIds.length != _exitedValidatorsCounts.length) { revert ArraysLengthMismatch(_stakingModuleIds.length, _exitedValidatorsCounts.length); } //initialize a variable to store the newly exited validators amount uint256 newlyExitedValidatorsCount; for (uint256 i = 0; i < _stakingModuleIds.length; ) { uint256 stakingModuleId = _stakingModuleIds[i]; //get data of the stakingModule indexed using id StakingModule storage stakingModule = _getStakingModuleById(stakingModuleId); //get previous exited validator count reported by AccountingOracle uint256 prevReportedExitedValidatorsCount = stakingModule.exitedValidatorsCount; //check the exit amount does increase if (_exitedValidatorsCounts[i] < prevReportedExitedValidatorsCount) { revert ExitedValidatorsCountCannotDecrease(); } //get the totalExitedValidators and totalDepositedValidators from StakingModule. ( uint256 totalExitedValidators, uint256 totalDepositedValidators, /* uint256 depositableValidatorsCount */ ) = IStakingModule(stakingModule.stakingModuleAddress).getStakingModuleSummary(); //check that total exited validator amount can't exceed total deposited validator amount if (_exitedValidatorsCounts[i] > totalDepositedValidators) { revert ReportedExitedValidatorsExceedDeposited( _exitedValidatorsCounts[i], totalDepositedValidators ); } //cumulate newly exited validator amount newlyExitedValidatorsCount += _exitedValidatorsCounts[i] - prevReportedExitedValidatorsCount; //if totalExitedValidators is smaller than prevReportedExitedValidatorsCount which means that data update in StakingModule has delay if (totalExitedValidators < prevReportedExitedValidatorsCount) { // not all of the exited validators were async reported to the module emit StakingModuleExitedValidatorsIncompleteReporting( stakingModuleId, prevReportedExitedValidatorsCount - totalExitedValidators ); } //update exitedValidatorsCount in storage stakingModule stakingModule.exitedValidatorsCount = _exitedValidatorsCounts[i]; unchecked { ++i; } } return newlyExitedValidatorsCount; }
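To make the bookkeeping concrete, here is a toy recomputation of a single loop iteration with hypothetical counts (not contract code):

```python
# Hypothetical counts for one staking module.
prev_reported = 100          # stakingModule.exitedValidatorsCount before this report
new_reported = 110           # _exitedValidatorsCounts[i] from the oracle
module_total_exited = 95     # module-internal count, may lag behind the router
total_deposited = 5_000

assert new_reported >= prev_reported    # the count cannot decrease
assert new_reported <= total_deposited  # cannot exceed deposited validators

newly_exited = new_reported - prev_reported  # contributes 10 to the aggregate

# 5 exits were never delivered to the module per-operator: this is the
# condition that emits StakingModuleExitedValidatorsIncompleteReporting.
if module_total_exited < prev_reported:
    unreported = prev_reported - module_total_exited  # == 5
```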
withdrawalQueue.onOracleReport
update the bunker mode status (active or not) based on the report
/// lido/lido-dao/contracts/0.8.9/WithdrawalQueueERC721.sol /// @notice Update bunker mode state and last report timestamp on oracle report /// @dev should be called by oracle /// /// @param _isBunkerModeNow is bunker mode reported by oracle /// @param _bunkerStartTimestamp timestamp of start of the bunker mode /// @param _currentReportTimestamp timestamp of the current report ref slot function onOracleReport(bool _isBunkerModeNow, uint256 _bunkerStartTimestamp, uint256 _currentReportTimestamp) external { _checkRole(ORACLE_ROLE, msg.sender); if (_bunkerStartTimestamp >= block.timestamp) revert InvalidReportTimestamp(); if (_currentReportTimestamp >= block.timestamp) revert InvalidReportTimestamp(); _setLastReportTimestamp(_currentReportTimestamp); bool isBunkerModeWasSetBefore = isBunkerModeActive(); // on bunker mode state change if (_isBunkerModeNow != isBunkerModeWasSetBefore) { // write previous timestamp to enable bunker or max uint to disable // #to: why use previous timestamp not current timestamp? if (_isBunkerModeNow) { BUNKER_MODE_SINCE_TIMESTAMP_POSITION.setStorageUint256(_bunkerStartTimestamp); emit BunkerModeEnabled(_bunkerStartTimestamp); } else { BUNKER_MODE_SINCE_TIMESTAMP_POSITION.setStorageUint256(BUNKER_MODE_DISABLED_TIMESTAMP); emit BunkerModeDisabled(); } } } /// @notice Check if bunker mode is active function isBunkerModeActive() public view returns (bool) { return bunkerModeSinceTimestamp() < BUNKER_MODE_DISABLED_TIMESTAMP; } /// Special value for timestamp when bunker mode is inactive (i.e., protocol in turbo mode) uint256 public constant BUNKER_MODE_DISABLED_TIMESTAMP = type(uint256).max;
 
handleOracleReport
Let’s dive deep into the Lido.handleOracleReport:
  • check Lido contract hasn’t been stopped
  • check msg.sender is the registered accountingOracle
  • check the report timestamp is not later than the current block time
  • Take a snapshot of the current (pre-) state: preTotalPooledEther, preTotalShares, preCLValidators, preCLBalance
  • Pass the report data to sanity checker (reverts if malformed)
  • calculate the ether to be locked for withdrawal queue and shares to be burnt
  • request to burn shares
  • calculate the ether to be transferred from the withdrawal vault and the execution layer rewards vault, considering the rebase limit, and the simulated shares to burn assuming no withdrawals
  • collect rewards and finalize withdrawal requests
  • commit to burn shares and burn shares
  • distribute protocol rewards to modules and treasury
  • execute post-rebase logic if exists
  • check that the simulated share rate provided in the report is consistent with the share rate derived inside the transaction
/** * @notice Updates accounting stats, collects EL rewards and distributes collected rewards * if beacon balance increased, performs withdrawal requests finalization * @dev periodically called by the AccountingOracle contract * * @param _reportTimestamp the moment of the oracle report calculation * @param _timeElapsed seconds elapsed since the previous report calculation * @param _clValidators number of Lido validators on Consensus Layer * @param _clBalance sum of all Lido validators' balances on Consensus Layer * @param _withdrawalVaultBalance withdrawal vault balance on Execution Layer at `_reportTimestamp` * @param _elRewardsVaultBalance elRewards vault balance on Execution Layer at `_reportTimestamp` * @param _sharesRequestedToBurn shares requested to burn through Burner at `_reportTimestamp` * @param _withdrawalFinalizationBatches the ascendingly-sorted array of withdrawal request IDs obtained by calling * WithdrawalQueue.calculateFinalizationBatches. Empty array means that no withdrawal requests should be finalized * @param _simulatedShareRate share rate that was simulated by oracle when the report data created (1e27 precision) * * NB: `_simulatedShareRate` should be calculated off-chain by calling the method with `eth_call` JSON-RPC API * while passing empty `_withdrawalFinalizationBatches` and `_simulatedShareRate` == 0, plugging the returned values * to the following formula: `_simulatedShareRate = (postTotalPooledEther * 1e27) / postTotalShares` * * @return postRebaseAmounts[0]: `postTotalPooledEther` amount of ether in the protocol after report * @return postRebaseAmounts[1]: `postTotalShares` amount of shares in the protocol after report * @return postRebaseAmounts[2]: `withdrawals` withdrawn from the withdrawals vault * @return postRebaseAmounts[3]: `elRewards` withdrawn from the execution layer rewards vault */ function handleOracleReport( // Oracle timings uint256 _reportTimestamp, uint256 _timeElapsed, // CL values uint256 _clValidators, uint256 _clBalance, // EL values uint256 _withdrawalVaultBalance, uint256 _elRewardsVaultBalance, uint256 _sharesRequestedToBurn, // Decision about withdrawals processing uint256[] _withdrawalFinalizationBatches, uint256 _simulatedShareRate ) external returns (uint256[4] postRebaseAmounts) { _whenNotStopped(); return _handleOracleReport( OracleReportedData( _reportTimestamp, _timeElapsed, _clValidators, _clBalance, _withdrawalVaultBalance, _elRewardsVaultBalance, _sharesRequestedToBurn, _withdrawalFinalizationBatches, _simulatedShareRate ) ); } /** * @dev Handle oracle report method operating with the data-packed structs * Using structs helps to overcome 'stack too deep' issue. * * The method updates the protocol's accounting state. * Key steps: * 1. Take a snapshot of the current (pre-) state * 2. Pass the report data to sanity checker (reverts if malformed) * 3. Pre-calculate the ether to lock for withdrawal queue and shares to be burnt * 4. Pass the accounting values to sanity checker to smoothen positive token rebase * (i.e., postpone the extra rewards to be applied during the next rounds) * 5. Invoke finalization of the withdrawal requests * 6. Burn excess shares within the allowed limit (can postpone some shares to be burnt later) * 7. Distribute protocol fee (treasury & node operators) * 8. Complete token rebase by informing observers (emit an event and call the external receivers if any) * 9. 
Sanity check for the provided simulated share rate */ function _handleOracleReport(OracleReportedData memory _reportedData) internal returns (uint256[4]) { OracleReportContracts memory contracts = _loadOracleReportContracts(); require(msg.sender == contracts.accountingOracle, "APP_AUTH_FAILED"); require(_reportedData.reportTimestamp <= block.timestamp, "INVALID_REPORT_TIMESTAMP"); OracleReportContext memory reportContext; // Step 1. // Take a snapshot of the current (pre-) state reportContext.preTotalPooledEther = _getTotalPooledEther(); reportContext.preTotalShares = _getTotalShares(); reportContext.preCLValidators = CL_VALIDATORS_POSITION.getStorageUint256(); reportContext.preCLBalance = _processClStateUpdate( _reportedData.reportTimestamp, reportContext.preCLValidators, _reportedData.clValidators, _reportedData.postCLBalance ); // Step 2. // Pass the report data to sanity checker (reverts if malformed) _checkAccountingOracleReport(contracts, _reportedData, reportContext); // Step 3. // Pre-calculate the ether to lock for withdrawal queue and shares to be burnt // due to withdrawal requests to finalize if (_reportedData.withdrawalFinalizationBatches.length != 0) { ( reportContext.etherToLockOnWithdrawalQueue, reportContext.sharesToBurnFromWithdrawalQueue ) = _calculateWithdrawals(contracts, _reportedData); if (reportContext.sharesToBurnFromWithdrawalQueue > 0) { IBurner(contracts.burner).requestBurnShares( contracts.withdrawalQueue, reportContext.sharesToBurnFromWithdrawalQueue ); } } // Step 4. // Pass the accounting values to sanity checker to smoothen positive token rebase uint256 withdrawals; uint256 elRewards; ( withdrawals, elRewards, reportContext.simulatedSharesToBurn, reportContext.sharesToBurn ) = IOracleReportSanityChecker(contracts.oracleReportSanityChecker).smoothenTokenRebase( reportContext.preTotalPooledEther, reportContext.preTotalShares, reportContext.preCLBalance, _reportedData.postCLBalance, _reportedData.withdrawalVaultBalance, _reportedData.elRewardsVaultBalance, _reportedData.sharesRequestedToBurn, reportContext.etherToLockOnWithdrawalQueue, reportContext.sharesToBurnFromWithdrawalQueue ); // Step 5. // Invoke finalization of the withdrawal requests (send ether to withdrawal queue, assign shares to be burnt) _collectRewardsAndProcessWithdrawals( contracts, withdrawals, elRewards, _reportedData.withdrawalFinalizationBatches, _reportedData.simulatedShareRate, reportContext.etherToLockOnWithdrawalQueue ); emit ETHDistributed( _reportedData.reportTimestamp, reportContext.preCLBalance, _reportedData.postCLBalance, withdrawals, elRewards, _getBufferedEther() ); // Step 6. // Burn the previously requested shares if (reportContext.sharesToBurn > 0) { IBurner(contracts.burner).commitSharesToBurn(reportContext.sharesToBurn); _burnShares(contracts.burner, reportContext.sharesToBurn); } // Step 7. // Distribute protocol fee (treasury & node operators) reportContext.sharesMintedAsFees = _processRewards( reportContext, _reportedData.postCLBalance, withdrawals, elRewards ); // Step 8. // Complete token rebase by informing observers (emit an event and call the external receivers if any) ( uint256 postTotalShares, uint256 postTotalPooledEther ) = _completeTokenRebase( _reportedData, reportContext, IPostTokenRebaseReceiver(contracts.postTokenRebaseReceiver) ); // Step 9. 
Sanity check for the provided simulated share rate if (_reportedData.withdrawalFinalizationBatches.length != 0) { IOracleReportSanityChecker(contracts.oracleReportSanityChecker).checkSimulatedShareRate( postTotalPooledEther, postTotalShares, reportContext.etherToLockOnWithdrawalQueue, reportContext.sharesToBurn.sub(reportContext.simulatedSharesToBurn), _reportedData.simulatedShareRate ); } return [postTotalPooledEther, postTotalShares, withdrawals, elRewards]; }
 
Take a snapshot and update the CL balance
Note that preCLBalance is not simply loaded from storage: it is the stored CL balance plus (_postClValidators - _preClValidators) * 32 ETH, i.e. each newly appeared validator is counted at its initial 32 ETH deposit. This yields the CL balance that would be observed with no slashing and no withdrawals, so the one-off penalty can be estimated as:
preCLBalance - (_postCLBalance + _withdrawalVaultBalance)
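Plugging toy numbers into that formula (a sketch, not the sanity checker itself):

```python
DEPOSIT_SIZE = 32 * 10**18  # 32 ETH in wei

# preCLBalance = stored CL balance + 32 ETH per newly appeared validator.
stored_cl_balance = 96_000 * 10**18
appeared_validators = 10
pre_cl_balance = stored_cl_balance + appeared_validators * DEPOSIT_SIZE  # 96,320 ETH

# Post-state taken from the current report.
post_cl_balance = 96_200 * 10**18
withdrawal_vault_balance = 150 * 10**18

# Positive values approximate the one-off slashing penalty; here the result
# is -30 ETH, i.e. the validators earned rewards and nothing was slashed.
one_off_decrease = pre_cl_balance - (post_cl_balance + withdrawal_vault_balance)
```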
/* * @dev updates Consensus Layer state snapshot according to the current report * * NB: conventions and assumptions * * `depositedValidators` are total amount of the **ever** deposited Lido validators * `_postClValidators` are total amount of the **ever** appeared on the CL side Lido validators * * i.e., exited Lido validators persist in the state, just with a different status */ function _processClStateUpdate( uint256 _reportTimestamp, uint256 _preClValidators, uint256 _postClValidators, uint256 _postClBalance ) internal returns (uint256 preCLBalance) { uint256 depositedValidators = DEPOSITED_VALIDATORS_POSITION.getStorageUint256(); require(_postClValidators <= depositedValidators, "REPORTED_MORE_DEPOSITED"); require(_postClValidators >= _preClValidators, "REPORTED_LESS_VALIDATORS"); // CL_VALIDATORS records the cumulative active validators if (_postClValidators > _preClValidators) { CL_VALIDATORS_POSITION.setStorageUint256(_postClValidators); } // appeared validators from the last report time uint256 appearedValidators = _postClValidators - _preClValidators; preCLBalance = CL_BALANCE_POSITION.getStorageUint256(); // Take into account the balance of the newly appeared validators preCLBalance = preCLBalance.add(appearedValidators.mul(DEPOSIT_SIZE)); // Save the current CL balance and validators to // calculate rewards on the next push CL_BALANCE_POSITION.setStorageUint256(_postClBalance); emit CLValidatorsUpdated(_reportTimestamp, _preClValidators, _postClValidators); }
 
sanity check of report data
It checks:
  • _withdrawalVaultBalance in the report does not exceed the actual ether balance of the withdrawalVault
  • _elRewardsVaultBalance in the report does not exceed the actual ether balance of the elRewardsVault
  • _sharesRequestedToBurn in the report does not exceed the shares actually requested to burn in the Burner
  • the one-off consensus layer balance decrease between reports doesn't exceed the limit
  • the annualized consensus layer balance increase between reports doesn't exceed the limit (the base is the CL balance assuming no slashing and no withdrawals)
  • the increase of appeared validators doesn't cross the per-day churn limit
function checkAccountingOracleReport( uint256 _timeElapsed, uint256 _preCLBalance, uint256 _postCLBalance, uint256 _withdrawalVaultBalance, uint256 _elRewardsVaultBalance, uint256 _sharesRequestedToBurn, uint256 _preCLValidators, uint256 _postCLValidators ) external view { LimitsList memory limitsList = _limits.unpack(); address withdrawalVault = LIDO_LOCATOR.withdrawalVault(); // 1. Withdrawals vault reported balance _checkWithdrawalVaultBalance(withdrawalVault.balance, _withdrawalVaultBalance); address elRewardsVault = LIDO_LOCATOR.elRewardsVault(); // 2. EL rewards vault reported balance _checkELRewardsVaultBalance(elRewardsVault.balance, _elRewardsVaultBalance); // 3. Burn requests _checkSharesRequestedToBurn(_sharesRequestedToBurn); // 4. Consensus Layer one-off balance decrease _checkOneOffCLBalanceDecrease(limitsList, _preCLBalance, _postCLBalance + _withdrawalVaultBalance); // 5. Consensus Layer annual balances increase _checkAnnualBalancesIncrease(limitsList, _preCLBalance, _postCLBalance, _timeElapsed); // 6. Appeared validators increase if (_postCLValidators > _preCLValidators) { _checkAppearedValidatorsChurnLimit(limitsList, (_postCLValidators - _preCLValidators), _timeElapsed); } }
 
Pre-calculate the ether to lock for withdrawal queue and shares to be burnt
Before the calculation, it first calls oracleReportSanityChecker.checkWithdrawalQueueOracleReport to check the validity of the withdrawalFinalizationBatches, verifying that enough time has elapsed since the last to-finalize withdrawal request was created.
// lido/lido-dao/contracts/0.4.24/Lido.sol /** * @dev return amount to lock on withdrawal queue and shares to burn * depending on the finalization batch parameters */ function _calculateWithdrawals( OracleReportContracts memory _contracts, OracleReportedData memory _reportedData ) internal view returns ( uint256 etherToLock, uint256 sharesToBurn ) { IWithdrawalQueue withdrawalQueue = IWithdrawalQueue(_contracts.withdrawalQueue); if (!withdrawalQueue.isPaused()) { IOracleReportSanityChecker(_contracts.oracleReportSanityChecker).checkWithdrawalQueueOracleReport( _reportedData.withdrawalFinalizationBatches[_reportedData.withdrawalFinalizationBatches.length - 1], _reportedData.reportTimestamp ); (etherToLock, sharesToBurn) = withdrawalQueue.prefinalize( _reportedData.withdrawalFinalizationBatches, _reportedData.simulatedShareRate ); } }
 
Then it calls withdrawalQueue.prefinalize to calculate the ethToLock and sharesToBurn.
Lido calculates the withdrawal requests' ethToLock and sharesToBurn per batch against _maxShareRate.
_batches is an array of ending request ids. Each batch consists of requests whose share rates are all below _maxShareRate or all above it (nominal or discounted, respectively). For example, here is how 14 requests with different share rates are split into 5 batches (a sketch of the splitting rule follows the batch list below):
 ^ share rate
 |
 | •  •               •             •  •  •  •
 |----------------------------•---------------- _maxShareRate
 |        •  •  •  •      •  •  •
 |
 +---------------------------------------------> requestId
 |  1st |     2nd      |3 |   4th    |   5th   |
batches:
  • 1,2
  • 3,4,5,6
  • 7
  • 8,9,10
  • 11,12,13,14
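The splitting rule itself is simple; a small Python sketch reproducing the example above (toy share rates, not contract code):

```python
E27 = 10**27

def split_into_batches(request_rates, max_share_rate):
    """Group consecutive request ids so that within one batch every request
    sits on the same side of max_share_rate (all nominal or all discounted)."""
    batches, prev_side = [], None
    for request_id, rate in enumerate(request_rates, start=1):
        side = rate > max_share_rate   # True -> discounted, False -> nominal
        if side != prev_side:
            batches.append([])
            prev_side = side
        batches[-1].append(request_id)
    return batches

max_rate = 10 * E27
rates = [r * E27 for r in [12, 12, 8, 8, 9, 9, 11, 8, 9, 8, 12, 12, 11, 11]]
assert split_into_batches(rates, max_rate) == [
    [1, 2], [3, 4, 5, 6], [7], [8, 9, 10], [11, 12, 13, 14],
]
```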
/// lido/lido-dao/contracts/0.8.9/WithdrawalQueueBase.sol /// @notice Checks finalization batches, calculates required ether and the amount of shares to burn /// @param _batches finalization batches calculated offchain using `calculateFinalizationBatches()` /// @param _maxShareRate max share rate that will be used for request finalization (1e27 precision) /// @return ethToLock amount of ether that should be sent with `finalize()` method /// @return sharesToBurn amount of shares that belongs to requests that will be finalized function prefinalize(uint256[] calldata _batches, uint256 _maxShareRate) external view returns (uint256 ethToLock, uint256 sharesToBurn) { if (_maxShareRate == 0) revert ZeroShareRate(); if (_batches.length == 0) revert EmptyBatches(); if (_batches[0] <= getLastFinalizedRequestId()) revert InvalidRequestId(_batches[0]); if (_batches[_batches.length - 1] > getLastRequestId()) revert InvalidRequestId(_batches[_batches.length - 1]); uint256 currentBatchIndex; uint256 prevBatchEndRequestId = getLastFinalizedRequestId(); WithdrawalRequest memory prevBatchEnd = _getQueue()[prevBatchEndRequestId]; while (currentBatchIndex < _batches.length) { uint256 batchEndRequestId = _batches[currentBatchIndex]; if (batchEndRequestId <= prevBatchEndRequestId) revert BatchesAreNotSorted(); WithdrawalRequest memory batchEnd = _getQueue()[batchEndRequestId]; (uint256 batchShareRate, uint256 stETH, uint256 shares) = _calcBatch(prevBatchEnd, batchEnd); if (batchShareRate > _maxShareRate) { // discounted ethToLock += shares * _maxShareRate / E27_PRECISION_BASE; } else { // nominal ethToLock += stETH; } sharesToBurn += shares; prevBatchEndRequestId = batchEndRequestId; prevBatchEnd = batchEnd; unchecked{ ++currentBatchIndex; } } } /// @dev calculate batch stats (shareRate, stETH and shares) for the range of `(_preStartRequest, _endRequest]` function _calcBatch(WithdrawalRequest memory _preStartRequest, WithdrawalRequest memory _endRequest) internal pure returns (uint256 shareRate, uint256 stETH, uint256 shares) { stETH = _endRequest.cumulativeStETH - _preStartRequest.cumulativeStETH; shares = _endRequest.cumulativeShares - _preStartRequest.cumulativeShares; shareRate = stETH * E27_PRECISION_BASE / shares; }
 
Pass the accounting values to sanity checker to smoothen positive token rebase
Lido limits how fast the share rate may increase, so the function smoothenTokenRebase is used to smoothen a positive token rebase.
It returns:
  • withdrawals: the ether actually allowed to be withdrawn from the withdrawal vault under the rebase restriction.
  • elRewards: the ether actually allowed to be withdrawn from the execution layer rewards vault under the rebase restriction.
  • simulatedSharesToBurn: the shares that could be burnt assuming no withdrawal requests are finalized, under the rebase restriction. It is used later to verify the validity of reportData.simulatedShareRate.
  • sharesToBurn: the maximum shares that can be burnt, accounting for the specified withdrawal requests, under the rebase restriction.
Note that withdrawals and elRewards may be less than the ether balances of the withdrawal vault and the execution layer rewards vault due to the rebase limit: only part of them will be transferred to Lido, updating the buffered ether, hence the total pooled ether, and hence the share rate users see when staking and withdrawing.
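A numeric sketch of how the limiter caps vault withdrawals (simplified from PositiveTokenRebaseLimiter, which is shown in full below; values are hypothetical):

```python
LIMITER_PRECISION_BASE = 10**9

pre_total_pooled_ether = 1_000_000 * 10**18
rebase_limit = 5 * 10**6        # a 0.5% positive rebase limit (1e9 precision)

max_total_pooled_ether = (pre_total_pooled_ether
    + pre_total_pooled_ether * rebase_limit // LIMITER_PRECISION_BASE)

# CL rewards already consumed 0.3% of the 0.5% budget...
current = pre_total_pooled_ether + 3_000 * 10**18
headroom = max_total_pooled_ether - current            # 2,000 ETH of room left

# ...so only 2,000 of the 5,000 ETH in the withdrawal vault can be taken now,
# and nothing is left over for the EL rewards vault.
withdrawal_vault_balance = 5_000 * 10**18
withdrawals = min(withdrawal_vault_balance, headroom)  # 2,000 ETH
el_rewards = 0
```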
/// lido/lido-dao/contracts/0.8.9/sanity_checks/OracleReportSanityChecker.sol /// @notice Returns the allowed ETH amount that might be taken from the withdrawal vault and EL /// rewards vault during Lido's oracle report processing /// @param _preTotalPooledEther total amount of ETH controlled by the protocol /// @param _preTotalShares total amount of minted stETH shares /// @param _preCLBalance sum of all Lido validators' balances on the Consensus Layer before the /// current oracle report /// @param _postCLBalance sum of all Lido validators' balances on the Consensus Layer after the /// current oracle report /// @param _withdrawalVaultBalance withdrawal vault balance on Execution Layer for the report calculation moment /// @param _elRewardsVaultBalance elRewards vault balance on Execution Layer for the report calculation moment /// @param _sharesRequestedToBurn shares requested to burn through Burner for the report calculation moment /// @param _etherToLockForWithdrawals ether to lock on withdrawals queue contract /// @param _newSharesToBurnForWithdrawals new shares to burn due to withdrawal request finalization /// @return withdrawals ETH amount allowed to be taken from the withdrawals vault /// @return elRewards ETH amount allowed to be taken from the EL rewards vault /// @return simulatedSharesToBurn simulated amount to be burnt (if no ether locked on withdrawals) /// @return sharesToBurn amount to be burnt (accounting for withdrawals finalization) function smoothenTokenRebase( uint256 _preTotalPooledEther, uint256 _preTotalShares, uint256 _preCLBalance, uint256 _postCLBalance, uint256 _withdrawalVaultBalance, uint256 _elRewardsVaultBalance, uint256 _sharesRequestedToBurn, uint256 _etherToLockForWithdrawals, uint256 _newSharesToBurnForWithdrawals ) external view returns ( uint256 withdrawals, uint256 elRewards, uint256 simulatedSharesToBurn, uint256 sharesToBurn ) { TokenRebaseLimiterData memory tokenRebaseLimiter = PositiveTokenRebaseLimiter.initLimiterState( getMaxPositiveTokenRebase(), _preTotalPooledEther, _preTotalShares ); if (_postCLBalance < _preCLBalance) { tokenRebaseLimiter.decreaseEther(_preCLBalance - _postCLBalance); } else { tokenRebaseLimiter.increaseEther(_postCLBalance - _preCLBalance); } withdrawals = tokenRebaseLimiter.increaseEther(_withdrawalVaultBalance); elRewards = tokenRebaseLimiter.increaseEther(_elRewardsVaultBalance); // determining the shares to burn limit that would have been // if no withdrawals finalized during the report // it's used to check later the provided `simulatedShareRate` value // after the off-chain calculation via `eth_call` of `Lido.handleOracleReport()` // see also step 9 of the `Lido._handleOracleReport()` simulatedSharesToBurn = Math256.min(tokenRebaseLimiter.getSharesToBurnLimit(), _sharesRequestedToBurn); // remove ether to lock for withdrawals from total pooled ether tokenRebaseLimiter.decreaseEther(_etherToLockForWithdrawals); // re-evaluate shares to burn after TVL was updated due to withdrawals finalization sharesToBurn = Math256.min( tokenRebaseLimiter.getSharesToBurnLimit(), _newSharesToBurnForWithdrawals + _sharesRequestedToBurn ); }
 
initLimiterState of the PositiveTokenRebaseLimiter library constructs the TokenRebaseLimiterData struct that drives the smoothening process.
In decreaseEther and increaseEther, if the rebase is unlimited, there is no need to modify the TokenRebaseLimiterData at all, because the result of OracleReportSanityChecker.smoothenTokenRebase won't be affected by the rebase limit.
If the rebase is limited and an increase operation would exceed the limit, the increase is capped at the maximum allowed amount, and the actually consumed amount is returned to the caller.
// lido/lido-dao/contracts/0.8.9/lib/PositiveTokenRebaseLimiter.sol /** * @dev Internal limiter representation struct (storing in memory) */ struct TokenRebaseLimiterData { uint256 preTotalPooledEther; // pre-rebase total pooled ether uint256 preTotalShares; // pre-rebase total shares uint256 currentTotalPooledEther; // intermediate total pooled ether amount while token rebase is in progress uint256 positiveRebaseLimit; // positive rebase limit (target value) with 1e9 precision (`LIMITER_PRECISION_BASE`) uint256 maxTotalPooledEther; // maximum total pooled ether that still fits into the positive rebase limit (cached) } /** * @dev Initialize the new `LimiterState` structure instance * @param _rebaseLimit max limiter value (saturation point), see `LIMITER_PRECISION_BASE` * @param _preTotalPooledEther pre-rebase total pooled ether, see `Lido.getTotalPooledEther()` * @param _preTotalShares pre-rebase total shares, see `Lido.getTotalShares()` * @return limiterState newly initialized limiter structure */ function initLimiterState( uint256 _rebaseLimit, uint256 _preTotalPooledEther, uint256 _preTotalShares ) internal pure returns (TokenRebaseLimiterData memory limiterState) { if (_rebaseLimit == 0) revert TooLowTokenRebaseLimit(); if (_rebaseLimit > UNLIMITED_REBASE) revert TooHighTokenRebaseLimit(); // special case if (_preTotalPooledEther == 0) { _rebaseLimit = UNLIMITED_REBASE; } limiterState.currentTotalPooledEther = limiterState.preTotalPooledEther = _preTotalPooledEther; limiterState.preTotalShares = _preTotalShares; limiterState.positiveRebaseLimit = _rebaseLimit; limiterState.maxTotalPooledEther = (_rebaseLimit == UNLIMITED_REBASE) ? type(uint256).max : limiterState.preTotalPooledEther + (limiterState.positiveRebaseLimit * limiterState.preTotalPooledEther) / LIMITER_PRECISION_BASE; } /** * @dev increase total pooled ether up to the limit and return the consumed value (not exceeding the limit) * @param _limiterState limit repr struct * @param _etherAmount desired ether addition * @return consumedEther appended ether still not exceeding the limit */ function increaseEther( TokenRebaseLimiterData memory _limiterState, uint256 _etherAmount ) internal pure returns (uint256 consumedEther) { if (_limiterState.positiveRebaseLimit == UNLIMITED_REBASE) return _etherAmount; uint256 prevPooledEther = _limiterState.currentTotalPooledEther; _limiterState.currentTotalPooledEther += _etherAmount; _limiterState.currentTotalPooledEther = Math256.min(_limiterState.currentTotalPooledEther, _limiterState.maxTotalPooledEther); assert(_limiterState.currentTotalPooledEther >= prevPooledEther); return _limiterState.currentTotalPooledEther - prevPooledEther; } /** * @notice decrease total pooled ether by the given amount of ether * @param _limiterState limit repr struct * @param _etherAmount amount of ether to decrease */ function decreaseEther( TokenRebaseLimiterData memory _limiterState, uint256 _etherAmount ) internal pure { if (_limiterState.positiveRebaseLimit == UNLIMITED_REBASE) return; if (_etherAmount > _limiterState.currentTotalPooledEther) revert NegativeTotalPooledEther(); _limiterState.currentTotalPooledEther -= _etherAmount; }
 
The PositiveTokenRebaseLimiter library's getSharesToBurnLimit calculates the maximum burnable shares under the rebase limit restriction.
We name:
  • pre total pooled ether: $E_{pre}$
  • pre total shares: $S_{pre}$
  • shares to burn limit: $S_{max}$
  • current total pooled ether: $E_{cur}$
  • rebase limit: $L$ (expressed against the 1e9 precision base)
The $S_{max}$ satisfies (burning $S_{max}$ shares lifts the share rate exactly to the positive rebase limit):
$$\frac{E_{cur}}{S_{pre} - S_{max}} = \frac{E_{pre}}{S_{pre}} \cdot (1 + L)$$
Then we can calculate:
$$S_{max} = S_{pre} \cdot \frac{(1 + L) - E_{cur}/E_{pre}}{1 + L}$$
// lido/lido-dao/contracts/0.8.9/lib/PositiveTokenRebaseLimiter.sol /** * @dev Internal limiter representation struct (storing in memory) */ struct TokenRebaseLimiterData { uint256 preTotalPooledEther; // pre-rebase total pooled ether uint256 preTotalShares; // pre-rebase total shares uint256 currentTotalPooledEther; // intermediate total pooled ether amount while token rebase is in progress uint256 positiveRebaseLimit; // positive rebase limit (target value) with 1e9 precision (`LIMITER_PRECISION_BASE`) uint256 maxTotalPooledEther; // maximum total pooled ether that still fits into the positive rebase limit (cached) } /** * @dev return shares to burn value not exceeding the limit * @param _limiterState limit repr struct * @return maxSharesToBurn allowed to deduct shares to not exceed the limit */ function getSharesToBurnLimit(TokenRebaseLimiterData memory _limiterState) internal pure returns (uint256 maxSharesToBurn) { if (_limiterState.positiveRebaseLimit == UNLIMITED_REBASE) return _limiterState.preTotalShares; if (isLimitReached(_limiterState)) return 0; uint256 rebaseLimitPlus1 = _limiterState.positiveRebaseLimit + LIMITER_PRECISION_BASE; uint256 pooledEtherRate = (_limiterState.currentTotalPooledEther * LIMITER_PRECISION_BASE) / _limiterState.preTotalPooledEther; maxSharesToBurn = (_limiterState.preTotalShares * (rebaseLimitPlus1 - pooledEtherRate)) / rebaseLimitPlus1; }
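A quick numeric check of this formula against getSharesToBurnLimit above (toy values, pure-Python re-derivation):

```python
BASE = 10**9                          # LIMITER_PRECISION_BASE

pre_total_pooled_ether = 1_000_000
pre_total_shares = 1_000_000
rebase_limit = 10**7                  # L = 1% with the 1e9 precision base
current_total_pooled_ether = 990_000  # TVL after subtracting locked withdrawals

rebase_limit_plus_1 = rebase_limit + BASE
pooled_ether_rate = current_total_pooled_ether * BASE // pre_total_pooled_ether
max_shares_to_burn = (pre_total_shares
    * (rebase_limit_plus_1 - pooled_ether_rate) // rebase_limit_plus_1)

# Burning that many shares pushes the share rate (up to integer flooring)
# exactly to the +1% cap.
post_rate = current_total_pooled_ether / (pre_total_shares - max_shares_to_burn)
pre_rate = pre_total_pooled_ether / pre_total_shares
assert abs(post_rate / pre_rate - 1.01) < 1e-5
```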
After the smoothening operation, Lido collects the calculated amounts of ether from the withdrawal and execution layer rewards vaults, and calls the withdrawalQueue to finalize withdrawal requests.
Inside:
  • withdraw execution layer rewards and put them to the buffer
  • withdraw withdrawals and put them to the buffer
  • finalize withdrawals (send ether, assign shares for burning)
  • update the buffered ether (see the arithmetic sketch after this list)
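The buffer arithmetic at the end of this function is straightforward; with hypothetical amounts:

```python
buffered_before = 10_000 * 10**18
el_rewards_withdrawn = 50 * 10**18    # collected from the EL rewards vault
withdrawals_withdrawn = 500 * 10**18  # collected from the withdrawal vault
ether_locked_on_queue = 400 * 10**18  # sent along with WithdrawalQueue.finalize

# Mirrors: postBufferedEther = buffered + elRewards + withdrawals - etherToLock
buffered_after = (buffered_before
                  + el_rewards_withdrawn
                  + withdrawals_withdrawn
                  - ether_locked_on_queue)
assert buffered_after == 10_150 * 10**18
```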
/** * @dev collect ETH from ELRewardsVault and WithdrawalVault, then send to WithdrawalQueue */ function _collectRewardsAndProcessWithdrawals( OracleReportContracts memory _contracts, uint256 _withdrawalsToWithdraw, uint256 _elRewardsToWithdraw, uint256[] _withdrawalFinalizationBatches, uint256 _simulatedShareRate, uint256 _etherToLockOnWithdrawalQueue ) internal { // withdraw execution layer rewards and put them to the buffer if (_elRewardsToWithdraw > 0) { ILidoExecutionLayerRewardsVault(_contracts.elRewardsVault).withdrawRewards(_elRewardsToWithdraw); } // withdraw withdrawals and put them to the buffer if (_withdrawalsToWithdraw > 0) { IWithdrawalVault(_contracts.withdrawalVault).withdrawWithdrawals(_withdrawalsToWithdraw); } // finalize withdrawals (send ether, assign shares for burning) if (_etherToLockOnWithdrawalQueue > 0) { IWithdrawalQueue withdrawalQueue = IWithdrawalQueue(_contracts.withdrawalQueue); withdrawalQueue.finalize.value(_etherToLockOnWithdrawalQueue)( _withdrawalFinalizationBatches[_withdrawalFinalizationBatches.length - 1], _simulatedShareRate ); } uint256 postBufferedEther = _getBufferedEther() .add(_elRewardsToWithdraw) // Collected from ELVault .add(_withdrawalsToWithdraw) // Collected from WithdrawalVault .sub(_etherToLockOnWithdrawalQueue); // Sent to WithdrawalQueue _setBufferedEther(postBufferedEther); }
 
The WithdrawalVault and LidoExecutionLayerRewardsVault send ether back to Lido:
// lido/lido-dao/contracts/0.8.9/LidoExecutionLayerRewardsVault.sol /** * @notice Withdraw all accumulated rewards to Lido contract * @dev Can be called only by the Lido contract * @param _maxAmount Max amount of ETH to withdraw * @return amount of funds received as execution layer rewards (in wei) */ function withdrawRewards(uint256 _maxAmount) external returns (uint256 amount) { require(msg.sender == LIDO, "ONLY_LIDO_CAN_WITHDRAW"); uint256 balance = address(this).balance; amount = (balance > _maxAmount) ? _maxAmount : balance; if (amount > 0) { ILido(LIDO).receiveELRewards{value: amount}(); } return amount; }
// lido/lido-dao/contracts/0.8.9/WithdrawalVault.sol /** * @notice Withdraw `_amount` of accumulated withdrawals to Lido contract * @dev Can be called only by the Lido contract * @param _amount amount of ETH to withdraw */ function withdrawWithdrawals(uint256 _amount) external { if (msg.sender != address(LIDO)) { revert NotLido(); } if (_amount == 0) { revert ZeroAmount(); } uint256 balance = address(this).balance; if (_amount > balance) { revert NotEnoughEther(_amount, balance); } LIDO.receiveWithdrawals{value: _amount}(); }
 
Inside the WithdrawalQueueERC721.finalize:
  • check whether the contract has been paused
  • check the caller has FINALIZE_ROLE
  • finalize requests from the next unfinalized request to the last to-finalize request specified by the caller
Note that the withdrawalQueue uses checkpoints to record each finalization batch's max share rate, which is later used to calculate each request's actual claim rate (discounted or nominal).
/// lido/lido-dao/contracts/0.8.9/WithdrawalQueueERC721.sol /// @notice structure to store discounts for requests that are affected by negative rebase struct Checkpoint { uint256 fromRequestId; uint256 maxShareRate; } /// @notice Finalize requests from last finalized one up to `_lastRequestIdToBeFinalized` /// @dev ether to finalize all the requests should be calculated using `prefinalize()` and sent along function finalize(uint256 _lastRequestIdToBeFinalized, uint256 _maxShareRate) external payable { // check whether the contract has been paused _checkResumed(); // check the caller has FINALIZE_ROLE _checkRole(FINALIZE_ROLE, msg.sender); uint256 firstFinalizedRequestId = getLastFinalizedRequestId() + 1; // finalize requests from the next unfinalized request to the last to-finalize request specified by the caller _finalize(_lastRequestIdToBeFinalized, msg.value, _maxShareRate); // ERC4906 metadata update event // We are updating all unfinalized to make it look different as they move closer to finalization in the future emit BatchMetadataUpdate(firstFinalizedRequestId, getLastRequestId()); } /// @dev Finalize requests in the queue /// Emits WithdrawalsFinalized event. function _finalize(uint256 _lastRequestIdToBeFinalized, uint256 _amountOfETH, uint256 _maxShareRate) internal { // check the last to-finalize withdrawal request doesn't exceed the last registered withdrawal request if (_lastRequestIdToBeFinalized > getLastRequestId()) revert InvalidRequestId(_lastRequestIdToBeFinalized); // check the last to-finalize withdrawal request has not been finalized uint256 lastFinalizedRequestId = getLastFinalizedRequestId(); if (_lastRequestIdToBeFinalized <= lastFinalizedRequestId) revert InvalidRequestId(_lastRequestIdToBeFinalized); // get the detailed data of request // the last finalized request is used to calculate the diff ether and shares, because the withdrawalQueue only records cumulative data WithdrawalRequest memory lastFinalizedRequest = _getQueue()[lastFinalizedRequestId]; WithdrawalRequest memory requestToFinalize = _getQueue()[_lastRequestIdToBeFinalized]; // calculate the to-finalize stETH (ether) uint128 stETHToFinalize = requestToFinalize.cumulativeStETH - lastFinalizedRequest.cumulativeStETH; // check the transferred-in ether doesn't exceed the stETH (ether) to finalize if (_amountOfETH > stETHToFinalize) revert TooMuchEtherToFinalize(_amountOfETH, stETHToFinalize); uint256 firstRequestIdToFinalize = lastFinalizedRequestId + 1; uint256 lastCheckpointIndex = getLastCheckpointIndex(); // add a new checkpoint with current finalization max share rate _getCheckpoints()[lastCheckpointIndex + 1] = Checkpoint(firstRequestIdToFinalize, _maxShareRate); _setLastCheckpointIndex(lastCheckpointIndex + 1); // update locked ether amount _setLockedEtherAmount(getLockedEtherAmount() + _amountOfETH); // update the last finalized request id _setLastFinalizedRequestId(_lastRequestIdToBeFinalized); emit WithdrawalsFinalized( firstRequestIdToFinalize, _lastRequestIdToBeFinalized, _amountOfETH, requestToFinalize.cumulativeShares - lastFinalizedRequest.cumulativeShares, block.timestamp ); }
 
Burn the previously requested shares
Lido has a Burner contract whose role is to track shares requested to burn and shares already burnt.
Burning shares involves three steps:
  • request: transfer the shares to the Burner and register the burn request on it.
  • commit: commit the shares to burn, updating the burn records in the Burner.
  • burn: actually burn the shares, which happens inside Lido.
There are two types of to-burn shares:
  • cover: used to cover losses in staking. For example, if there is a significant staking loss, some party may take responsibility for covering it: it can add ether to the pool and burn its own shares, raising the share rate and thereby compensating users' losses.
  • non-cover: any other case, such as stETH withdrawals, where shares are burnt in exchange for ether.
In the previous step, Lido already transferred the to-burn shares to the Burner and registered the burn request. It then commits the burn request and finally burns the shares (a toy model of the counters follows below).
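A toy model of the request/commit bookkeeping, simplified from the Burner code below (cover shares are committed first, then non-cover):

```python
# Simplified mirror of Burner.sol's counters (toy model, not contract code).
cover_requested = 100
non_cover_requested = 400
total_cover_burnt = 0
total_non_cover_burnt = 0

def commit_shares_to_burn(shares: int) -> None:
    global cover_requested, non_cover_requested
    global total_cover_burnt, total_non_cover_burnt
    assert shares <= cover_requested + non_cover_requested
    burn_cover = min(shares, cover_requested)  # cover requests are served first
    cover_requested -= burn_cover
    total_cover_burnt += burn_cover
    burn_non_cover = shares - burn_cover       # remainder comes from non-cover
    non_cover_requested -= burn_non_cover
    total_non_cover_burnt += burn_non_cover

commit_shares_to_burn(300)   # burns 100 cover + 200 non-cover shares
assert (total_cover_burnt, total_non_cover_burnt) == (100, 200)
```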
// lido/lido-dao/contracts/0.4.24/Lido.sol function _handleOracleReport(OracleReportedData memory _reportedData) internal returns (uint256[4]) { ... // Pre-calculate the ether to lock for withdrawal queue and shares to be burnt // due to withdrawal requests to finalize if (_reportedData.withdrawalFinalizationBatches.length != 0) { ( reportContext.etherToLockOnWithdrawalQueue, reportContext.sharesToBurnFromWithdrawalQueue ) = _calculateWithdrawals(contracts, _reportedData); if (reportContext.sharesToBurnFromWithdrawalQueue > 0) { IBurner(contracts.burner).requestBurnShares( contracts.withdrawalQueue, reportContext.sharesToBurnFromWithdrawalQueue ); } } if (reportContext.sharesToBurn > 0) { IBurner(contracts.burner).commitSharesToBurn(reportContext.sharesToBurn); _burnShares(contracts.burner, reportContext.sharesToBurn); } ... } /** * @notice Destroys `_sharesAmount` shares from `_account`'s holdings, decreasing the total amount of shares. * @dev This doesn't decrease the token total supply. * * Requirements: * * - `_account` cannot be the zero address. * - `_account` must hold at least `_sharesAmount` shares. * - the contract must not be paused. */ function _burnShares(address _account, uint256 _sharesAmount) internal returns (uint256 newTotalShares) { require(_account != address(0), "BURN_FROM_ZERO_ADDR"); uint256 accountShares = shares[_account]; require(_sharesAmount <= accountShares, "BALANCE_EXCEEDED"); uint256 preRebaseTokenAmount = getPooledEthByShares(_sharesAmount); newTotalShares = _getTotalShares().sub(_sharesAmount); TOTAL_SHARES_POSITION.setStorageUint256(newTotalShares); shares[_account] = accountShares.sub(_sharesAmount); uint256 postRebaseTokenAmount = getPooledEthByShares(_sharesAmount); emit SharesBurnt(_account, preRebaseTokenAmount, postRebaseTokenAmount, _sharesAmount); // Notice: we're not emitting a Transfer event to the zero address here since shares burn // works by redistributing the amount of tokens corresponding to the burned shares between // all other token holders. The total supply of the token doesn't change as the result. // This is equivalent to performing a send from `address` to each other token holder address, // but we cannot reflect this as it would require sending an unbounded number of events. // We're emitting `SharesBurnt` event to provide an explicit rebase log record nonetheless. }
// lido/lido-dao/contracts/0.8.9/Burner.sol /** * @notice BE CAREFUL, the provided stETH will be burnt permanently. * * Transfers `_sharesAmountToBurn` stETH shares from `_from` and irreversibly locks these * on the burner contract address. Marks the shares amount for burning * by increasing the `coverSharesBurnRequested` counter. * * @param _from address to transfer shares from * @param _sharesAmountToBurn stETH shares to burn * */ function requestBurnSharesForCover(address _from, uint256 _sharesAmountToBurn) external onlyRole(REQUEST_BURN_SHARES_ROLE) { uint256 stETHAmount = IStETH(STETH).transferSharesFrom(_from, address(this), _sharesAmountToBurn); _requestBurn(_sharesAmountToBurn, stETHAmount, true /* _isCover */); } function _requestBurn(uint256 _sharesAmount, uint256 _stETHAmount, bool _isCover) private { if (_sharesAmount == 0) revert ZeroBurnAmount(); emit StETHBurnRequested(_isCover, msg.sender, _stETHAmount, _sharesAmount); if (_isCover) { coverSharesBurnRequested += _sharesAmount; } else { nonCoverSharesBurnRequested += _sharesAmount; } } /** * Commit cover/non-cover burning requests and logs cover/non-cover shares amount just burnt. * * NB: The real burn enactment to be invoked after the call (via internal Lido._burnShares()) * * Increments `totalCoverSharesBurnt` and `totalNonCoverSharesBurnt` counters. * Decrements `coverSharesBurnRequested` and `nonCoverSharesBurnRequested` counters. * Does nothing if zero amount passed. * * @param _sharesToBurn amount of shares to be burnt */ function commitSharesToBurn(uint256 _sharesToBurn) external virtual override { if (msg.sender != STETH) revert AppAuthLidoFailed(); if (_sharesToBurn == 0) { return; } uint256 memCoverSharesBurnRequested = coverSharesBurnRequested; uint256 memNonCoverSharesBurnRequested = nonCoverSharesBurnRequested; uint256 burnAmount = memCoverSharesBurnRequested + memNonCoverSharesBurnRequested; if (_sharesToBurn > burnAmount) { revert BurnAmountExceedsActual(_sharesToBurn, burnAmount); } uint256 sharesToBurnNow; if (memCoverSharesBurnRequested > 0) { uint256 sharesToBurnNowForCover = Math.min(_sharesToBurn, memCoverSharesBurnRequested); totalCoverSharesBurnt += sharesToBurnNowForCover; uint256 stETHToBurnNowForCover = IStETH(STETH).getPooledEthByShares(sharesToBurnNowForCover); emit StETHBurnt(true /* isCover */, stETHToBurnNowForCover, sharesToBurnNowForCover); coverSharesBurnRequested -= sharesToBurnNowForCover; sharesToBurnNow += sharesToBurnNowForCover; } if (memNonCoverSharesBurnRequested > 0 && sharesToBurnNow < _sharesToBurn) { uint256 sharesToBurnNowForNonCover = Math.min( _sharesToBurn - sharesToBurnNow, memNonCoverSharesBurnRequested ); totalNonCoverSharesBurnt += sharesToBurnNowForNonCover; uint256 stETHToBurnNowForNonCover = IStETH(STETH).getPooledEthByShares(sharesToBurnNowForNonCover); emit StETHBurnt(false /* isCover */, stETHToBurnNowForNonCover, sharesToBurnNowForNonCover); nonCoverSharesBurnRequested -= sharesToBurnNowForNonCover; sharesToBurnNow += sharesToBurnNowForNonCover; } assert(sharesToBurnNow == _sharesToBurn); }
 
Distribute protocol fee
Lido distributes the protocol fee by minting the corresponding shares to each qualified staking module's fee recipient and to the treasury.
Inside Lido._processRewards:
  • calculate the post-report CL total balance.
  • check whether there is a reward; if so, distribute the fee.
Note that _withdrawnWithdrawals here is the withdrawal vault balance adjusted for the rebase limit, and _reportContext.preCLBalance is the pre-report CL balance plus the initial deposits (32 ETH each) of the validators activated during the oracle report interval. So postCLTotalBalance - preCLBalance represents the reward earned during the report interval, as the worked equation below shows.
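As a worked equation (reconstructed from the note above and the _processRewards code below):

$$preCLBalance = reportedCLBalance_{pre} + 32\ \text{ETH} \times newlyDepositedValidators$$

$$reward = \underbrace{(postCLBalance + withdrawnWithdrawals)}_{postCLTotalBalance} - \; preCLBalance$$

The fee is distributed only when this reward is positive, with _withdrawnElRewards added on top before distribution.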
// lido/lido-dao/contracts/0.4.24/Lido.sol /** * @dev calculate the amount of rewards and distribute it */ function _processRewards( OracleReportContext memory _reportContext, uint256 _postCLBalance, uint256 _withdrawnWithdrawals, uint256 _withdrawnElRewards ) internal returns (uint256 sharesMintedAsFees) { uint256 postCLTotalBalance = _postCLBalance.add(_withdrawnWithdrawals); // Don’t mint/distribute any protocol fee on the non-profitable Lido oracle report // (when consensus layer balance delta is zero or negative). // See LIP-12 for details: // https://research.lido.fi/t/lip-12-on-chain-part-of-the-rewards-distribution-after-the-merge/1625 if (postCLTotalBalance > _reportContext.preCLBalance) { uint256 consensusLayerRewards = postCLTotalBalance - _reportContext.preCLBalance; sharesMintedAsFees = _distributeFee( _reportContext.preTotalPooledEther, _reportContext.preTotalShares, consensusLayerRewards.add(_withdrawnElRewards) ); } }
 
Inside the _distributeFee:
  • calls router.getStakingRewardsDistribution to get the reward distribution information, which includes the fee proportion shared by each staking module and the treasury.
  • calculate the shares to be minted and mint them to Lido first
  • transfer shares to each module's fee recipient
  • transfer the remaining shares to the treasury
  • report to each staking module that rewards have been minted, so the module can run its own logic to handle the received fees.
Lido mints shares to the staking modules and the treasury so that they receive the corresponding stETH as the fee. The newly minted share amount shares2mint must satisfy two equations: the fee in ether is the configured proportion of the total rewards, and the minted shares are worth exactly that fee at the post-report share rate.
From these we can calculate shares2mint:
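The equations below are reconstructed from the _distributeFee code that follows (totalFee and precisionPoints come from the StakingRouter):

$$feeEther = totalRewards \cdot \frac{totalFee}{precisionPoints}, \qquad shares2mint \cdot \frac{preTotalPooledEther + totalRewards}{preTotalShares + shares2mint} = feeEther$$

Solving for shares2mint:

$$shares2mint = \frac{totalRewards \cdot totalFee \cdot preTotalShares}{(preTotalPooledEther + totalRewards) \cdot precisionPoints - totalRewards \cdot totalFee}$$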
 
Note that Lido's code uses pre-withdrawal data to calculate shares2mint: preTotalShares and preTotalPooledEther are not the actual values at the time of protocol fee distribution, because they don't reflect the ether locked and the shares burnt to finalize withdrawal requests. So the actual ether represented by the minted shares may differ slightly from the calculated protocol fee.
// lido/lido-dao/contracts/0.4.24/Lido.sol /** * @dev Staking router rewards distribution. * * Corresponds to the return value of `IStakingRouter.newTotalPooledEtherForRewards()` * Prevents `stack too deep` issue. */ struct StakingRewardsDistribution { address[] recipients; uint256[] moduleIds; uint96[] modulesFees; uint96 totalFee; uint256 precisionPoints; } /** * @dev Distributes fee portion of the rewards by minting and distributing corresponding amount of liquid tokens. * @param _preTotalPooledEther Total supply before report-induced changes applied * @param _preTotalShares Total shares before report-induced changes applied * @param _totalRewards Total rewards accrued both on the Execution Layer and the Consensus Layer sides in wei. */ function _distributeFee( uint256 _preTotalPooledEther, uint256 _preTotalShares, uint256 _totalRewards ) internal returns (uint256 sharesMintedAsFees) { // get fee distribution details includes proportion of fees shared by each staking module. ( StakingRewardsDistribution memory rewardsDistribution, IStakingRouter router ) = _getStakingRewardsDistribution(); if (rewardsDistribution.totalFee > 0) { uint256 totalPooledEtherWithRewards = _preTotalPooledEther.add(_totalRewards); // calculate total to-mint shares sharesMintedAsFees = _totalRewards.mul(rewardsDistribution.totalFee).mul(_preTotalShares).div( totalPooledEtherWithRewards.mul( rewardsDistribution.precisionPoints ).sub(_totalRewards.mul(rewardsDistribution.totalFee)) ); // mint shares to Lido _mintShares(address(this), sharesMintedAsFees); // transfer shares to each module's fee recipient address (uint256[] memory moduleRewards, uint256 totalModuleRewards) = _transferModuleRewards( rewardsDistribution.recipients, rewardsDistribution.modulesFees, rewardsDistribution.totalFee, sharesMintedAsFees ); // transfer shares belongs to the lido treasury _transferTreasuryRewards(sharesMintedAsFees.sub(totalModuleRewards)); // report to each staking module that reward has been minted // which will calls each staking module's onRewardsMinted function if there is some logic to execute router.reportRewardsMinted( rewardsDistribution.moduleIds, moduleRewards ); } } /** * @dev Get staking rewards distribution from staking router. */ function _getStakingRewardsDistribution() internal view returns ( StakingRewardsDistribution memory ret, IStakingRouter router ) { router = _stakingRouter(); ( ret.recipients, ret.moduleIds, ret.modulesFees, ret.totalFee, ret.precisionPoints ) = router.getStakingRewardsDistribution(); require(ret.recipients.length == ret.modulesFees.length, "WRONG_RECIPIENTS_INPUT"); require(ret.moduleIds.length == ret.modulesFees.length, "WRONG_MODULE_IDS_INPUT"); } function _transferModuleRewards( address[] memory recipients, uint96[] memory modulesFees, uint256 totalFee, uint256 totalRewards ) internal returns (uint256[] memory moduleRewards, uint256 totalModuleRewards) { moduleRewards = new uint256[](recipients.length); for (uint256 i; i < recipients.length; ++i) { if (modulesFees[i] > 0) { // calculate fee belongs to this staking module uint256 iModuleRewards = totalRewards.mul(modulesFees[i]).div(totalFee); moduleRewards[i] = iModuleRewards; // transfer shares to the fee receipient of the module _transferShares(address(this), recipients[i], iModuleRewards); _emitTransferAfterMintingShares(recipients[i], iModuleRewards); totalModuleRewards = totalModuleRewards.add(iModuleRewards); } } }
 
Inside StakingRouter.getStakingRewardsDistribution, the fee proportion of each staking module is calculated.
We name:
  • activeValidatorsCount: the number of active validators run by a staking module
  • totalActiveValidators: the number of active validators across all staking modules
  • stakingModuleFee : the module's configured fee share (in basis points) that goes to the module itself
  • treasuryFee : the module's configured fee share (in basis points) that goes to the Lido treasury
The total fee proportion assigned to a staking module is weighted by that module's proportion of active validators.
Each staking module has a stakingModuleFee and a treasuryFee config: the former indicates the proportion of the fee given to the module, and the latter the proportion given to the Lido treasury.
So we can calculate the actual fee proportion of one module as (reconstructed from the code below):

$$moduleShare_i = \frac{activeValidatorsCount_i}{totalActiveValidators}, \qquad stakingModuleFee_i = moduleShare_i \cdot stakingModuleFee, \qquad treasuryFee_i = moduleShare_i \cdot treasuryFee$$

$$totalFee = \sum_i \big(stakingModuleFee_i + treasuryFee_i\big)$$
Also, a stopped module can't accept fees: its fee portion goes to the treasury instead, so that the DAO can manage it (e.g., to compensate the module in case of an error). (stakingModuleFees[i] / totalFee is the final fee proportion of each module.)
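A hypothetical numeric example: suppose a module runs 100 of 1,000 total active validators, with stakingModuleFee = 5% and treasuryFee = 5% configured. Then

$$stakingModuleFee_i = \frac{100}{1000} \cdot 5\% = 0.5\%, \qquad treasuryFee_i = \frac{100}{1000} \cdot 5\% = 0.5\%$$

so this module adds 1% to totalFee, and its fee recipient ultimately receives stakingModuleFees[i] / totalFee of the minted fee shares.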
// lido/lido-dao/contracts/0.8.9/StakingRouter.sol /** * @notice Return shares table * * @return recipients rewards recipient addresses corresponding to each module * @return stakingModuleIds module IDs * @return stakingModuleFees fee of each recipient * @return totalFee total fee to mint for each staking module and treasury * @return precisionPoints base precision number, which constitutes 100% fee */ function getStakingRewardsDistribution() public view returns ( address[] memory recipients, uint256[] memory stakingModuleIds, uint96[] memory stakingModuleFees, uint96 totalFee, uint256 precisionPoints ) { (uint256 totalActiveValidators, StakingModuleCache[] memory stakingModulesCache) = _loadStakingModulesCache(); uint256 stakingModulesCount = stakingModulesCache.length; /// @dev return empty response if there are no staking modules or active validators yet if (stakingModulesCount == 0 || totalActiveValidators == 0) { return (new address[](0), new uint256[](0), new uint96[](0), 0, FEE_PRECISION_POINTS); } precisionPoints = FEE_PRECISION_POINTS; stakingModuleIds = new uint256[](stakingModulesCount); recipients = new address[](stakingModulesCount); stakingModuleFees = new uint96[](stakingModulesCount); uint256 rewardedStakingModulesCount = 0; uint256 stakingModuleValidatorsShare; uint96 stakingModuleFee; for (uint256 i; i < stakingModulesCount; ) { /// @dev skip staking modules which have no active validators if (stakingModulesCache[i].activeValidatorsCount > 0) { stakingModuleIds[rewardedStakingModulesCount] = stakingModulesCache[i].stakingModuleId; stakingModuleValidatorsShare = ((stakingModulesCache[i].activeValidatorsCount * precisionPoints) / totalActiveValidators); recipients[rewardedStakingModulesCount] = address(stakingModulesCache[i].stakingModuleAddress); stakingModuleFee = uint96((stakingModuleValidatorsShare * stakingModulesCache[i].stakingModuleFee) / TOTAL_BASIS_POINTS); /// @dev if the staking module has the `Stopped` status for some reason, then /// the staking module's rewards go to the treasury, so that the DAO has ability /// to manage them (e.g. to compensate the staking module in case of an error, etc.) if (stakingModulesCache[i].status != StakingModuleStatus.Stopped) { stakingModuleFees[rewardedStakingModulesCount] = stakingModuleFee; } // else keep stakingModuleFees[rewardedStakingModulesCount] = 0, but increase totalFee totalFee += (uint96((stakingModuleValidatorsShare * stakingModulesCache[i].treasuryFee) / TOTAL_BASIS_POINTS) + stakingModuleFee); unchecked { rewardedStakingModulesCount++; } } unchecked { ++i; } } // Total fee never exceeds 100% assert(totalFee <= precisionPoints); /// @dev shrink arrays if (rewardedStakingModulesCount < stakingModulesCount) { assembly { mstore(stakingModuleIds, rewardedStakingModulesCount) mstore(recipients, rewardedStakingModulesCount) mstore(stakingModuleFees, rewardedStakingModulesCount) } } }
 
If postTokenRebaseReceiver is not the zero address, Lido calls its handlePostTokenRebase function to execute post-rebase logic.
function _handleOracleReport(OracleReportedData memory _reportedData) internal returns (uint256[4]) { ... // Complete token rebase by informing observers (emit an event and call the external receivers if any) ( uint256 postTotalShares, uint256 postTotalPooledEther ) = _completeTokenRebase( _reportedData, reportContext, IPostTokenRebaseReceiver(contracts.postTokenRebaseReceiver) ); ... } /** * @dev Notify observers about the completed token rebase. * Emit events and call external receivers. */ function _completeTokenRebase( OracleReportedData memory _reportedData, OracleReportContext memory _reportContext, IPostTokenRebaseReceiver _postTokenRebaseReceiver ) internal returns (uint256 postTotalShares, uint256 postTotalPooledEther) { postTotalShares = _getTotalShares(); postTotalPooledEther = _getTotalPooledEther(); if (_postTokenRebaseReceiver != address(0)) { _postTokenRebaseReceiver.handlePostTokenRebase( _reportedData.reportTimestamp, _reportedData.timeElapsed, _reportContext.preTotalShares, _reportContext.preTotalPooledEther, postTotalShares, postTotalPooledEther, _reportContext.sharesMintedAsFees ); } emit TokenRebased( _reportedData.reportTimestamp, _reportedData.timeElapsed, _reportContext.preTotalShares, _reportContext.preTotalPooledEther, postTotalShares, postTotalPooledEther, _reportContext.sharesMintedAsFees ); }
 

Claim

Users can call WithdrawalQueueERC721.claimWithdrawals to claim their ether. To withdraw, a user specifies the request ids and the corresponding checkpoint hints, which are used to load the max share rate that determines the ether value of each request.
Inside the process:
  • check that the number of requests and hints (checkpoint ids) match
  • check the request has been finalized
  • check the request has not been claimed
  • check the caller is the owner of the request
  • mark the request as claimed
  • calculate the claimable ether using the checkpoint's maxShareRate (see the numeric example below)
  • update the locked ether amount
  • send the ether to the recipient
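A hypothetical numeric example of the discount: suppose a request locked 10 stETH backed by 5 shares, so its batch share rate is 2.0 (2e27 in the code's E27 precision). If the checkpoint's maxShareRate is only 1.5e27 (e.g., a negative rebase occurred before finalization), the claimable amount is

$$eth = shares \cdot \frac{maxShareRate}{10^{27}} = 5 \cdot 1.5 = 7.5\ \text{ETH}$$

instead of the 10 ETH originally requested.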
//lido/lido-dao/contracts/0.8.9/WithdrawalQueueERC721.sol /// @notice Claim a batch of withdrawal requests if they are finalized sending locked ether to the owner /// @param _requestIds array of request ids to claim /// @param _hints checkpoint hint for each id. Can be obtained with `findCheckpointHints()` /// @dev /// Reverts if requestIds and hints arrays length differs /// Reverts if any requestId or hint in arguments are not valid /// Reverts if any request is not finalized or already claimed /// Reverts if msg sender is not an owner of the requests function claimWithdrawals(uint256[] calldata _requestIds, uint256[] calldata _hints) external { // check the amount of requests and hints(checkpoint ids) matchs if (_requestIds.length != _hints.length) { revert ArraysLengthMismatch(_requestIds.length, _hints.length); } for (uint256 i = 0; i < _requestIds.length; ++i) { _claim(_requestIds[i], _hints[i], msg.sender); _emitTransfer(msg.sender, address(0), _requestIds[i]); } } /// @dev Claim the request and transfer locked ether to `_recipient`. /// Emits WithdrawalClaimed event /// @param _requestId id of the request to claim /// @param _hint hint the checkpoint to use. Can be obtained by calling `findCheckpointHint()` /// @param _recipient address to send ether to function _claim(uint256 _requestId, uint256 _hint, address _recipient) internal { if (_requestId == 0) revert InvalidRequestId(_requestId); // check the request has be finalized if (_requestId > getLastFinalizedRequestId()) revert RequestNotFoundOrNotFinalized(_requestId); // load request data WithdrawalRequest storage request = _getQueue()[_requestId]; // check the request has not be claimed if (request.claimed) revert RequestAlreadyClaimed(_requestId); // only request owner can withdraw if (request.owner != msg.sender) revert NotOwner(msg.sender, request.owner); // update request's status request.claimed = true; assert(_getRequestsByOwner()[request.owner].remove(_requestId)); // calculate claimable ethers uint256 ethWithDiscount = _calculateClaimableEther(request, _requestId, _hint); // update locked ether amount // send ethers to the recipient // because of the stETH rounding issue // (issue: https://github.com/lidofinance/lido-dao/issues/442 ) // some dust (1-2 wei per request) will be accumulated upon claiming _setLockedEtherAmount(getLockedEtherAmount() - ethWithDiscount); _sendValue(_recipient, ethWithDiscount); emit WithdrawalClaimed(_requestId, msg.sender, _recipient, ethWithDiscount); } /// @dev Calculates ether value for the request using the provided hint. Checks if hint is valid /// @return claimableEther discounted eth for `_requestId` function _calculateClaimableEther(WithdrawalRequest storage _request, uint256 _requestId, uint256 _hint) internal view returns (uint256 claimableEther) { if (_hint == 0) revert InvalidHint(_hint); uint256 lastCheckpointIndex = getLastCheckpointIndex(); if (_hint > lastCheckpointIndex) revert InvalidHint(_hint); Checkpoint memory checkpoint = _getCheckpoints()[_hint]; // Reverts if requestId is not in range [checkpoint[hint], checkpoint[hint+1]) // ______(>______ // ^ hint if (_requestId < checkpoint.fromRequestId) revert InvalidHint(_hint); // if _hint is the lastCheckpointIndex, then no need to check below condition. 
because we have already checked the // request has been finalized, so its config is stored in the last checkpoint if (_hint < lastCheckpointIndex) { // ______(>______(>________ // hint hint+1 ^ Checkpoint memory nextCheckpoint = _getCheckpoints()[_hint + 1]; if (nextCheckpoint.fromRequestId <= _requestId) revert InvalidHint(_hint); } WithdrawalRequest memory prevRequest = _getQueue()[_requestId - 1]; (uint256 batchShareRate, uint256 eth, uint256 shares) = _calcBatch(prevRequest, _request); // if the share rate is greater than the maxShareRate, use the maxShareRate instead // so the withdraw is discounted. if (batchShareRate > checkpoint.maxShareRate) { eth = shares * checkpoint.maxShareRate / E27_PRECISION_BASE; } return eth; } /// @dev calculate batch stats (shareRate, stETH and shares) for the range of `(_preStartRequest, _endRequest]` function _calcBatch(WithdrawalRequest memory _preStartRequest, WithdrawalRequest memory _endRequest) internal pure returns (uint256 shareRate, uint256 stETH, uint256 shares) { stETH = _endRequest.cumulativeStETH - _preStartRequest.cumulativeStETH; shares = _endRequest.cumulativeShares - _preStartRequest.cumulativeShares; shareRate = stETH * E27_PRECISION_BASE / shares; }

Stake inside Lido

Lido V2.0 uses a modular architecture: ether can be distributed to different modules, each module can contain multiple node operators, and each operator can run multiple validators. This makes it possible to develop on-ramps for new Node Operators, ranging from solo stakers to DAOs and Distributed Validator Technology (DVT) clusters.
Staked ether flows from Lido.sol to different Staking Modules chosen by Lido (an off-chain decision). Inside a Staking Module, an on-chain algorithm divides the ether among its node operators, and node operators provide validators' public keys and signatures to receive it. The picture below shows the ether flow.
Note that the allocation of ether is decided by an on-chain algorithm called the Min First Allocation Strategy, which fills the least populated buckets first to equalize the fill factor (see the Appendix).
notion image
When a user calls submit to “stake” ether and receive stETH, the ether is not staked into the consensus layer directly, but stored in the Lido.sol contract as a buffer (staking on the consensus layer requires an available validator).

DepositSecurityModule.depositBufferedEther

Lido uses a Deposit Security Committee to monitor the deposit history and the set of Lido keys available for deposits, and to sign and disseminate messages allowing deposits.
After enough messages have been collected, the committee calls DepositSecurityModule.depositBufferedEther to trigger staking on the consensus layer (the current quorum is 4/6).
//lido/lido-dao/contracts/0.8.9/DepositSecurityModule.sol function depositBufferedEther( uint256 blockNumber, bytes32 blockHash, bytes32 depositRoot, //the root of the Beacon chain deposit contract uint256 stakingModuleId, // the id of the StakingModule to accpet stake ether uint256 nonce, // status nonce of the StakingModule bytes calldata depositCalldata, // not-used currently Signature[] calldata sortedGuardianSignatures // signatures of guardians ) external { //check check quorum condition if (quorum == 0 || sortedGuardianSignatures.length < quorum) revert DepositNoQuorum(); //check the message's time validity relative to the deposit root of the beacon chain deposit contract bytes32 onchainDepositRoot = IDepositContract(DEPOSIT_CONTRACT).get_deposit_root(); if (depositRoot != onchainDepositRoot) revert DepositRootChanged(); //check the stakingModule specified by stakingModuleId is active if (!STAKING_ROUTER.getStakingModuleIsActive(stakingModuleId)) revert DepositInactiveModule(); //Confirm that minDepositBlockDistance time has elapsed since the last deposit of this StakingModule. uint256 lastDepositBlock = STAKING_ROUTER.getStakingModuleLastDepositBlock(stakingModuleId); if (block.number - lastDepositBlock < minDepositBlockDistance) revert DepositTooFrequent(); if (blockHash == bytes32(0) || blockhash(blockNumber) != blockHash) revert DepositUnexpectedBlockHash(); //nonce's functionality: //1:To prevent message replay attack //2:Make sure the status of the StakingModule which the committee uses to // make stake decision is unchanged when actual stake transaction is being processed. // //nonce will increase by 1 each time the StakingModule accept stake ethers uint256 onchainNonce = STAKING_ROUTER.getStakingModuleNonce(stakingModuleId); if (nonce != onchainNonce) revert DepositNonceChanged(); //verify the signature is valid _verifySignatures(depositRoot, blockNumber, blockHash, stakingModuleId, nonce, sortedGuardianSignatures); //calls LIDO to distribute ethers to the specified StakingModule LIDO.deposit(maxDepositsPerBlock, stakingModuleId, depositCalldata); } function _verifySignatures( bytes32 depositRoot, uint256 blockNumber, bytes32 blockHash, uint256 stakingModuleId, uint256 nonce, Signature[] memory sigs ) internal view { bytes32 msgHash = keccak256( abi.encodePacked(ATTEST_MESSAGE_PREFIX, blockNumber, blockHash, depositRoot, stakingModuleId, nonce) ); address prevSignerAddr = address(0); for (uint256 i = 0; i < sigs.length; ++i) { address signerAddr = ECDSA.recover(msgHash, sigs[i].r, sigs[i].vs); if (!_isGuardian(signerAddr)) revert InvalidSignature(); if (signerAddr <= prevSignerAddr) revert SignaturesNotSorted(); prevSignerAddr = signerAddr; } }
//lido/lido-dao/contracts/0.8.9/StakingRouter.sol function getStakingModuleNonce(uint256 _stakingModuleId) external view returns (uint256) { return IStakingModule(_getStakingModuleAddressById(_stakingModuleId)).getNonce(); }
//lido/lido-dao/contracts/0.4.24/nos/NodeOperatorsRegistry.sol /// @notice Returns a counter that MUST change it's value when any of the following happens: /// 1. a node operator's deposit data is added /// 2. a node operator's deposit data is removed /// 3. a node operator's ready-to-deposit data size is changed /// 4. a node operator was activated/deactivated /// 5. a node operator's deposit data is used for the deposit function getNonce() external view returns (uint256) { return KEYS_OP_INDEX_POSITION.getStorageUint256(); }

LIDO.deposit

Inside Lido.deposit:
  • check that msg.sender is the registered depositSecurityModule.
  • check whether depositing is allowed, based on the bunker state and the protocol's pause state
  • calculate the depositable ether (buffered ether - unfinalized ether).
  • calculate the available deposit count of the staking module. Lido uses the depositable value and the Min First Allocation Strategy to calculate the module's deposit count, capped by the _maxDepositsCount specified by the committee.
  • update the buffered ether and the deposited validators count in storage.
  • call stakingRouter.deposit to deposit the calculated ether for the staking module (a worked example follows this list)
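As a sketch of the arithmetic (hypothetical numbers; the formulas mirror the code below):

$$depositableEther = \max(bufferedEther - unfinalizedStETH,\ 0)$$

$$depositsCount = \min(\_maxDepositsCount,\ routerMaxDepositsCount), \qquad depositsValue = depositsCount \cdot 32\ \text{ETH}$$

For example, with 1,000 ETH buffered and 100 ETH of unfinalized withdrawal requests, 900 ETH is depositable, enough for at most 28 validator deposits (896 ETH); the remainder stays in the buffer.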
//lido/lido-dao/contracts/0.4.24/Lido.sol /** * @dev Invokes a deposit call to the Staking Router contract and updates buffered counters * @param _maxDepositsCount max deposits count * @param _stakingModuleId id of the staking module to be deposited * @param _depositCalldata module calldata */ function deposit(uint256 _maxDepositsCount, uint256 _stakingModuleId, bytes _depositCalldata) external { //get LidoLocator contract ILidoLocator locator = getLidoLocator(); //only DepositSecurityModule can call require(msg.sender == locator.depositSecurityModule(), "APP_AUTH_DSM_FAILED"); //require can deposit require(canDeposit(), "CAN_NOT_DEPOSIT"); //get the deposit count based on depositable ether and the status of each Staking Module, based on MinFirstAllocationStrategy IStakingRouter stakingRouter = _stakingRouter(); uint256 depositsCount = Math256.min( _maxDepositsCount, stakingRouter.getStakingModuleMaxDepositsCount(_stakingModuleId, getDepositableEther()) ); uint256 depositsValue; //update data of buffered ether and deposited validators if (depositsCount > 0) { depositsValue = depositsCount.mul(DEPOSIT_SIZE); /// @dev firstly update the local state of the contract to prevent a reentrancy attack, /// even if the StakingRouter is a trusted contract. BUFFERED_ETHER_POSITION.setStorageUint256(_getBufferedEther().sub(depositsValue)); emit Unbuffered(depositsValue); uint256 newDepositedValidators = DEPOSITED_VALIDATORS_POSITION.getStorageUint256().add(depositsCount); DEPOSITED_VALIDATORS_POSITION.setStorageUint256(newDepositedValidators); emit DepositedValidatorsChanged(newDepositedValidators); } /// @dev transfer ether to StakingRouter and make a deposit at the same time. All the ether /// sent to StakingRouter is counted as deposited. If StakingRouter can't deposit all /// passed ether it MUST revert the whole transaction (never happens in normal circumstances) stakingRouter.deposit.value(depositsValue)(depositsCount, _stakingModuleId, _depositCalldata); } /** * @notice Gets authorized oracle address * @return address of oracle contract */ function getLidoLocator() public view returns (ILidoLocator) { return ILidoLocator(LIDO_LOCATOR_POSITION.getStorageAddress()); } /** * @dev Check that Lido allows depositing buffered ether to the consensus layer * Depends on the bunker state and protocol's pause state */ function canDeposit() public view returns (bool) { return !_withdrawalQueue().isBunkerModeActive() && !isStopped(); } /** * @dev Returns depositable ether amount. * Takes into account unfinalized stETH required by WithdrawalQueue */ function getDepositableEther() public view returns (uint256) { uint256 bufferedEther = _getBufferedEther(); uint256 withdrawalReserve = _withdrawalQueue().unfinalizedStETH(); return bufferedEther > withdrawalReserve ? bufferedEther - withdrawalReserve : 0; }

stakingRouter.deposit

Inside the stakingRouter.deposit:
  • check that msg.sender is Lido.
  • get withdrawalCredentials, which contain the address that receives staking withdrawals. The current withdrawalCredentials is 0x010000000000000000000000b9d7934878b5fb9610b3fe8a5e441e8fad7e293f, where 0x01 is a prefix differentiating the credential type, and 0xb9d7934878b5fb9610b3fe8a5e441e8fad7e293f is the execution-layer contract address that receives the staking withdrawals (rewards are swept to it regularly, with no gas cost to the user). See the sketch after this list.
  • check the staking module is active
  • update the local state of the staking module to prevent a reentrancy attack
  • check the msg.value is valid against _depositsCount
  • obtain a deposit batch of validators' public keys and signatures from the staking module, which is used to deposit on the beacon chain. stakingModule.obtainDepositData also updates the related validators' information in the staking module, such as the number of available validators.
  • deposit to the beacon chain deposit contract
  • check that all the ether has been successfully deposited
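For illustration, here is a minimal standalone sketch (not Lido code; the contract and function names are hypothetical) of how an 0x01-type withdrawal credential can be composed from a recipient address:
// sketch: composing 0x01-type withdrawal credentials (not Lido code)
pragma solidity ^0.8.9;

contract WithdrawalCredentialsSketch {
    /// @dev a 0x01 prefix byte, followed by 11 zero bytes, followed by the
    /// 20-byte execution-layer address that receives withdrawals
    function toEth1Credentials(address recipient) public pure returns (bytes32) {
        // set the most significant byte to 0x01; the address fills the last 20 bytes
        return bytes32(uint256(0x01) << 248) | bytes32(uint256(uint160(recipient)));
    }
}
Passing 0xb9D7934878B5FB9610B3fE8A5e441e8fAD7E293f to this helper reproduces exactly the withdrawalCredentials value quoted above.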
/// @dev Invokes a deposit call to the official Deposit contract /// @param _depositsCount number of deposits to make /// @param _stakingModuleId id of the staking module to be deposited /// @param _depositCalldata staking module calldata function deposit( uint256 _depositsCount, uint256 _stakingModuleId, bytes calldata _depositCalldata ) external payable { // check the msg.sender is registered Lido's address if (msg.sender != LIDO_POSITION.getStorageAddress()) revert AppAuthLidoFailed(); // get withdrawal credentials which include the contract address to accept staking rewards and principal bytes32 withdrawalCredentials = getWithdrawalCredentials(); if (withdrawalCredentials == 0) revert EmptyWithdrawalsCredentials(); // check the staking module is active StakingModule storage stakingModule = _getStakingModuleById(_stakingModuleId); if (StakingModuleStatus(stakingModule.status) != StakingModuleStatus.Active) revert StakingModuleNotActive(); /// @dev firstly update the local state of the contract to prevent a reentrancy attack /// even though the staking modules are trusted contracts stakingModule.lastDepositAt = uint64(block.timestamp); stakingModule.lastDepositBlock = block.number; // check the msg.value is valid against _depositsCount uint256 depositsValue = msg.value; emit StakingRouterETHDeposited(_stakingModuleId, depositsValue); if (depositsValue != _depositsCount * DEPOSIT_SIZE) revert InvalidDepositsValue(depositsValue, _depositsCount); if (_depositsCount > 0) { // obtain deposit batch of public keys and signatures of validators from staking module // which is used to deposit on the beacon chain. stakingModule.obtainDepositData will // also update the related validators' information in the stakingModule like // available validators. (bytes memory publicKeysBatch, bytes memory signaturesBatch) = IStakingModule(stakingModule.stakingModuleAddress) .obtainDepositData(_depositsCount, _depositCalldata); uint256 etherBalanceBeforeDeposits = address(this).balance; // deposit to the beacon chain deposit contract _makeBeaconChainDeposits32ETH( _depositsCount, abi.encodePacked(withdrawalCredentials), publicKeysBatch, signaturesBatch ); uint256 etherBalanceAfterDeposits = address(this).balance; /// @dev all sent ETH must be deposited and self balance stay the same assert(etherBalanceBeforeDeposits - etherBalanceAfterDeposits == depositsValue); } } /// @dev Invokes deposit calls to the official Beacon Deposit contract /// @param _keysCount amount of keys to deposit /// @param _withdrawalCredentials Commitment to a public key for withdrawals /// @param _publicKeysBatch A BLS12-381 public keys batch /// @param _signaturesBatch A BLS12-381 signatures batch function _makeBeaconChainDeposits32ETH( uint256 _keysCount, bytes memory _withdrawalCredentials, bytes memory _publicKeysBatch, bytes memory _signaturesBatch ) internal { // check public key and signature lengths are valid against keys count and data length. 
if (_publicKeysBatch.length != PUBLIC_KEY_LENGTH * _keysCount) { revert InvalidPublicKeysBatchLength(_publicKeysBatch.length, PUBLIC_KEY_LENGTH * _keysCount); } if (_signaturesBatch.length != SIGNATURE_LENGTH * _keysCount) { revert InvalidSignaturesBatchLength(_signaturesBatch.length, SIGNATURE_LENGTH * _keysCount); } // allocate buffer memory to store public key and signature bytes memory publicKey = MemUtils.unsafeAllocateBytes(PUBLIC_KEY_LENGTH); bytes memory signature = MemUtils.unsafeAllocateBytes(SIGNATURE_LENGTH); for (uint256 i; i < _keysCount;) { // decode public key and signature from batch data MemUtils.copyBytes(_publicKeysBatch, publicKey, i * PUBLIC_KEY_LENGTH, 0, PUBLIC_KEY_LENGTH); MemUtils.copyBytes(_signaturesBatch, signature, i * SIGNATURE_LENGTH, 0, SIGNATURE_LENGTH); // call beacon chain deposit contract to deposit ethers DEPOSIT_CONTRACT.deposit{value: DEPOSIT_SIZE}( publicKey, _withdrawalCredentials, signature, _computeDepositDataRoot(_withdrawalCredentials, publicKey, signature) ); unchecked { ++i; } } }
 
NodeOperatorsRegistry is a staking module that registers Node Operators selected by the Lido DAO. Each node operator has a NodeOperator struct which records its detailed information:
  • signingKeysStats: a packed data structure storing various statistics about a node operator's signing keys, including exitedSigningKeysCount, depositedSigningKeysCount, totalSigningKeysCount, and vettedSigningKeysCount. It provides a comprehensive overview of the status and history of a node operator's keys within the registry.
    • Note that totalSigningKeysCount is the total number of signing keys that a node operator has added to the registry, regardless of whether they have been vetted or approved for deposit. It represents the cumulative sum of all keys a node operator intends to use or has used for validator operations. And vettedSigningKeysCount represents the maximum number of validator keys approved for deposit by the DAO for a specific node operator. It's a subset of the total number of keys a node operator has submitted to the registry, indicating how many of those keys are approved for use in validator operations. It is dynamically managed based on the operational needs and decisions of the DAO regarding the node operator's performance, security considerations, or other factors.
  • stuckPenaltyStats: a structure recording information related to penalties imposed on node operators for keys that get "stuck" or fail to perform as expected (e.g., not participating in consensus due to technical issues). It includes counts of stuck keys, refunded keys, and timestamps related to the imposition and duration of penalties. It's crucial for managing the operational integrity of node operators and ensuring they meet performance expectations.
  • targetValidatorsStats: controlled by the Lido DAO, captures the operational targets set for a node operator, such as limits on the number of active validators it can manage. It includes flags for enabling/disabling these limits and the actual numeric targets. This mechanism allows dynamic adjustment of a node operator's contribution to the network based on performance, security needs, or network growth strategies.
/// @dev Node Operator parameters and internal state struct NodeOperator { /// @dev Flag indicating if the operator can participate in further staking and reward distribution bool active; /// @dev Ethereum address on Execution Layer which receives stETH rewards for this operator address rewardAddress; /// @dev Human-readable name string name; /// @dev The below variables store the signing keys info of the node operator. /// signingKeysStats - contains packed variables: uint64 exitedSigningKeysCount, uint64 depositedSigningKeysCount, /// uint64 vettedSigningKeysCount, uint64 totalSigningKeysCount /// /// These variables can take values in the following ranges: /// /// 0 <= exitedSigningKeysCount <= depositedSigningKeysCount /// exitedSigningKeysCount <= depositedSigningKeysCount <= vettedSigningKeysCount /// depositedSigningKeysCount <= vettedSigningKeysCount <= totalSigningKeysCount /// depositedSigningKeysCount <= totalSigningKeysCount <= UINT64_MAX /// /// Additionally, the exitedSigningKeysCount and depositedSigningKeysCount values are monotonically increasing: /// : : : : : /// [....exitedSigningKeysCount....]-------->: : : /// [....depositedSigningKeysCount :.........]-------->: : /// [....vettedSigningKeysCount....:.........:<--------]-------->: /// [....totalSigningKeysCount.....:.........:<--------:---------]-------> /// : : : : : Packed64x4.Packed signingKeysStats; Packed64x4.Packed stuckPenaltyStats; Packed64x4.Packed targetValidatorsStats; } /// @dev Mapping of all node operators. Mapping is used to be able to extend the struct. mapping(uint256 => NodeOperator) internal _nodeOperators;
 
Inside the NodeOperatorsRegistry.obtainDepositData:
  • check the msg.sender has STAKING_ROUTER_ROLE .
  • check _depositsCount is not zero.
  • use the Min First Allocation Strategy to allocate deposit counts to node operators.
  • require that the allocated deposit count matches the intended deposit count.
  • get public keys and signatures from selected node operators, and update the validator usage information of related node operators.
  • increase validator keys nonce.
Note that the _getSigningKeysAllocationData function uses maxSigningKeysCount rather than vettedSigningKeysCount to calculate node operator capacity. This is due to the potential stuck penalty: although the Lido DAO has vetted vettedSigningKeysCount keys for an operator, the operator may fail to fulfill its responsibilities, so a penalty mechanism exists to penalize such operators, and Lido uses the maxSigningKeysCount in targetValidatorsStats to cap the operator's capacity (a numeric example follows).
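A hypothetical numeric example of the capacity data fed into the allocation:

$$activeKeys = deposited - exited, \qquad capacity = maxSigningKeysCount - exited$$

An operator with exited = 2, deposited = 10 and maxSigningKeysCount = 15 enters the allocation with 8 active keys and a capacity of 13, so it can receive at most 5 more deposits.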
/// lido/lido-dao/contracts/0.4.24/nos/NodeOperatorsRegistry.sol /// @notice Obtains deposit data to be used by StakingRouter to deposit to the Ethereum Deposit /// contract /// @param _depositsCount Number of deposits to be done /// @return publicKeys Batch of the concatenated public validators keys /// @return signatures Batch of the concatenated deposit signatures for returned public keys function obtainDepositData( uint256 _depositsCount, bytes /* _depositCalldata */ ) external returns (bytes memory publicKeys, bytes memory signatures) { // check the msg.sender has STAKING_ROUTER_ROLE _auth(STAKING_ROUTER_ROLE); // check _depositsCount is not zero if (_depositsCount == 0) return (new bytes(0), new bytes(0)); // use Min First Allocation Strategy to allocate deposit counts to node operators ( uint256 allocatedKeysCount, uint256[] memory nodeOperatorIds, uint256[] memory activeKeysCountAfterAllocation ) = _getSigningKeysAllocationData(_depositsCount); // require allocated deposit count matchs the intended deposit count require(allocatedKeysCount == _depositsCount, "INVALID_ALLOCATED_KEYS_COUNT"); // get public keys and signatures, update the validator usage information of related node operators. (publicKeys, signatures) = _loadAllocatedSigningKeys( allocatedKeysCount, nodeOperatorIds, activeKeysCountAfterAllocation ); // increase validator keys nonce _increaseValidatorsKeysNonce(); } // use MIN FIRST ALLOCATION STRATEGY to allocate deposit count to node operators function _getSigningKeysAllocationData(uint256 _keysCount) internal view returns (uint256 allocatedKeysCount, uint256[] memory nodeOperatorIds, uint256[] memory activeKeyCountsAfterAllocation) { uint256 activeNodeOperatorsCount = getActiveNodeOperatorsCount(); nodeOperatorIds = new uint256[](activeNodeOperatorsCount); activeKeyCountsAfterAllocation = new uint256[](activeNodeOperatorsCount); uint256[] memory activeKeysCapacities = new uint256[](activeNodeOperatorsCount); uint256 activeNodeOperatorIndex; uint256 nodeOperatorsCount = getNodeOperatorsCount(); uint256 maxSigningKeysCount; uint256 depositedSigningKeysCount; uint256 exitedSigningKeysCount; // fetch the current allocation and capacity of each node operator for (uint256 nodeOperatorId; nodeOperatorId < nodeOperatorsCount; ++nodeOperatorId) { (exitedSigningKeysCount, depositedSigningKeysCount, maxSigningKeysCount) = _getNodeOperator(nodeOperatorId); // the node operator has no available signing keys if (depositedSigningKeysCount == maxSigningKeysCount) continue; nodeOperatorIds[activeNodeOperatorIndex] = nodeOperatorId; activeKeyCountsAfterAllocation[activeNodeOperatorIndex] = depositedSigningKeysCount - exitedSigningKeysCount; activeKeysCapacities[activeNodeOperatorIndex] = maxSigningKeysCount - exitedSigningKeysCount; ++activeNodeOperatorIndex; } // if there are no node operator able to accept stake, just return. obtainDepositData function // will revert on condition "allocatedKeysCount == _depositsCount". 
if (activeNodeOperatorIndex == 0) return (0, new uint256[](0), new uint256[](0)); /// @dev shrink the length of the resulting arrays if some active node operators have no available keys to be deposited if (activeNodeOperatorIndex < activeNodeOperatorsCount) { assembly { mstore(nodeOperatorIds, activeNodeOperatorIndex) mstore(activeKeyCountsAfterAllocation, activeNodeOperatorIndex) mstore(activeKeysCapacities, activeNodeOperatorIndex) } } // allocate deposit count to node operators allocatedKeysCount = MinFirstAllocationStrategy.allocate(activeKeyCountsAfterAllocation, activeKeysCapacities, _keysCount); /// @dev method NEVER allocates more keys than was requested assert(_keysCount >= allocatedKeysCount); } function _increaseValidatorsKeysNonce() internal { uint256 keysOpIndex = KEYS_OP_INDEX_POSITION.getStorageUint256() + 1; KEYS_OP_INDEX_POSITION.setStorageUint256(keysOpIndex); /// @dev [DEPRECATED] event preserved for tooling compatibility emit KeysOpIndexSet(keysOpIndex); emit NonceChanged(keysOpIndex); } // get exitedSigningKeysCount, depositedSigningKeysCount and maxSigningKeysCount of the node operator function _getNodeOperator(uint256 _nodeOperatorId) internal view returns (uint256 exitedSigningKeysCount, uint256 depositedSigningKeysCount, uint256 maxSigningKeysCount) { Packed64x4.Packed memory signingKeysStats = _loadOperatorSigningKeysStats(_nodeOperatorId); Packed64x4.Packed memory operatorTargetStats = _loadOperatorTargetValidatorsStats(_nodeOperatorId); exitedSigningKeysCount = signingKeysStats.get(TOTAL_EXITED_KEYS_COUNT_OFFSET); depositedSigningKeysCount = signingKeysStats.get(TOTAL_DEPOSITED_KEYS_COUNT_OFFSET); maxSigningKeysCount = operatorTargetStats.get(MAX_VALIDATORS_COUNT_OFFSET); // Validate data boundaries invariants here to not use SafeMath in caller methods assert(maxSigningKeysCount >= depositedSigningKeysCount && depositedSigningKeysCount >= exitedSigningKeysCount); }
 
_loadAllocatedSigningKeys is used to fetch a batch of public keys and signatures of the specified node operators' validators:
  • initialize buffer memory to store the fetched public keys and signatures of validators
  • initialize variable to record loaded keys count
  • get the signing keys status of the node operator
  • fetch and calculate the before and after deposited signing keys count
  • if the operator has no additional capacity to accept deposits, skip it
  • calculate the to-load signing keys count
  • load public keys and signatures of keysCount validators of the node operator
  • accumulate loadedKeysCount
  • update total deposited keys count of the node operator
  • update the max validator count of the operator and the summary because the penalty status of operator may have changed
  • check loaded keys count == keys count intended to load
  • update deposited keys count in the summary data
// lido/lido-dao/contracts/0.4.24/nos/NodeOperatorsRegistry.sol function _loadAllocatedSigningKeys( uint256 _keysCountToLoad, uint256[] memory _nodeOperatorIds, uint256[] memory _activeKeyCountsAfterAllocation ) internal returns (bytes memory pubkeys, bytes memory signatures) { // initialize buffer memory to store feteched public keys and signatures of validators (pubkeys, signatures) = SigningKeys.initKeysSigsBuf(_keysCountToLoad); // initialize variable to record loaded keys count uint256 loadedKeysCount = 0; uint256 depositedSigningKeysCountBefore; uint256 depositedSigningKeysCountAfter; uint256 keysCount; Packed64x4.Packed memory signingKeysStats; for (uint256 i; i < _nodeOperatorIds.length; ++i) { // get the signing keys status of the node operator signingKeysStats = _loadOperatorSigningKeysStats(_nodeOperatorIds[i]); // fetch and calculate the before and after deposited signing keys count depositedSigningKeysCountBefore = signingKeysStats.get(TOTAL_DEPOSITED_KEYS_COUNT_OFFSET); depositedSigningKeysCountAfter = signingKeysStats.get(TOTAL_EXITED_KEYS_COUNT_OFFSET) + _activeKeyCountsAfterAllocation[i]; // no capacity to accept deposit, just return if (depositedSigningKeysCountAfter == depositedSigningKeysCountBefore) continue; // For gas savings SafeMath.add() wasn't used on depositedSigningKeysCountAfter // calculation, so below we check that operation finished without overflow // In case of overflow: // depositedSigningKeysCountAfter < signingKeysStats.get(TOTAL_EXITED_KEYS_COUNT_OFFSET) // what violates invariant: // depositedSigningKeysCount >= exitedSigningKeysCount assert(depositedSigningKeysCountAfter > depositedSigningKeysCountBefore); // calculate the to-load signing keys count keysCount = depositedSigningKeysCountAfter - depositedSigningKeysCountBefore; // load public keys and signatures of keysCount validators of the node operator SIGNING_KEYS_MAPPING_NAME.loadKeysSigs( _nodeOperatorIds[i], depositedSigningKeysCountBefore, keysCount, pubkeys, signatures, loadedKeysCount ); // cumulate loadedKeysCount loadedKeysCount += keysCount; emit DepositedSigningKeysCountChanged(_nodeOperatorIds[i], depositedSigningKeysCountAfter); // update total deposited keys count of the node operator signingKeysStats.set(TOTAL_DEPOSITED_KEYS_COUNT_OFFSET, depositedSigningKeysCountAfter); _saveOperatorSigningKeysStats(_nodeOperatorIds[i], signingKeysStats); // update the max validator count of the operator because the penalty status may have changed _updateSummaryMaxValidatorsCount(_nodeOperatorIds[i]); } // check loaded keys count == keys count intended to load assert(loadedKeysCount == _keysCountToLoad); // update deposited keys count in the summary Packed64x4.Packed memory summarySigningKeysStats = _loadSummarySigningKeysStats(); summarySigningKeysStats.add(SUMMARY_DEPOSITED_KEYS_COUNT_OFFSET, loadedKeysCount); _saveSummarySigningKeysStats(summarySigningKeysStats); }
 
_updateSummaryMaxValidatorsCount recalculates and updates the max validator count for the operator and the summary stats, based on the node operator's penalty status.
  • If the operator's penalty is not cleared, it gets no deposit capacity (its max count is pinned to the deposited count).
  • Otherwise, if a target limit is active, TOTAL_EXITED_KEYS_COUNT + TARGET_VALIDATORS_COUNT is used as the capacity (clamped between the deposited and vetted counts), so that the number of active validators is capped at TARGET_VALIDATORS_COUNT.
  • If there is neither a penalty nor a limit, TOTAL_VETTED_KEYS_COUNT is used as the capacity.
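Putting the three branches together (reconstructed from the _applyNodeOperatorLimits code below):

$$newMaxSigningKeysCount = \begin{cases} deposited & \text{if the penalty is not cleared} \\ \max\big(deposited,\ \min(vetted,\ exited + target)\big) & \text{if a target limit is active} \\ vetted & \text{otherwise} \end{cases}$$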
// lido/lido-dao/contracts/0.4.24/nos/NodeOperatorsRegistry.sol // @dev Recalculate and update the max validator count for operator and summary stats function _updateSummaryMaxValidatorsCount(uint256 _nodeOperatorId) internal { // calculate the old and new Max Signing Keys Count of the node operator based on the penalty status (uint256 oldMaxSigningKeysCount, uint256 newMaxSigningKeysCount) = _applyNodeOperatorLimits(_nodeOperatorId); // if there is no change, then just return if (newMaxSigningKeysCount == oldMaxSigningKeysCount) return; // load the summary Signing Keys Stats Packed64x4.Packed memory summarySigningKeysStats = _loadSummarySigningKeysStats(); // update the SUMMARY_MAX_VALIDATORS_COUNT in the summarySigningKeysStats uint256 maxSigningKeysCountAbsDiff = Math256.absDiff(newMaxSigningKeysCount, oldMaxSigningKeysCount); if (newMaxSigningKeysCount > oldMaxSigningKeysCount) { summarySigningKeysStats.add(SUMMARY_MAX_VALIDATORS_COUNT_OFFSET, maxSigningKeysCountAbsDiff); } else { summarySigningKeysStats.sub(SUMMARY_MAX_VALIDATORS_COUNT_OFFSET, maxSigningKeysCountAbsDiff); } _saveSummarySigningKeysStats(summarySigningKeysStats); } // update MAX_VALIDATORS_COUNT of node operators function _applyNodeOperatorLimits(uint256 _nodeOperatorId) internal returns (uint256 oldMaxSigningKeysCount, uint256 newMaxSigningKeysCount) { // load status Packed64x4.Packed memory signingKeysStats = _loadOperatorSigningKeysStats(_nodeOperatorId); Packed64x4.Packed memory operatorTargetStats = _loadOperatorTargetValidatorsStats(_nodeOperatorId); uint256 depositedSigningKeysCount = signingKeysStats.get(TOTAL_DEPOSITED_KEYS_COUNT_OFFSET); // It's expected that validators don't suffer from penalties most of the time, // so optimistically, set the count of max validators equal to the vetted validators count. newMaxSigningKeysCount = signingKeysStats.get(TOTAL_VETTED_KEYS_COUNT_OFFSET); if (!isOperatorPenaltyCleared(_nodeOperatorId)) { // when the node operator is penalized zeroing its depositable validators count // remain capacity = newMaxSigningKeysCount - depositedSigningKeysCount = 0 newMaxSigningKeysCount = depositedSigningKeysCount; // if the flag is not zero, then there exist limit } else if (operatorTargetStats.get(IS_TARGET_LIMIT_ACTIVE_OFFSET) != 0) { // apply target limit when it's active and the node operator is not penalized. // use TARGET_VALIDATORS_COUNT decided by Lido as MaxSigningKeysCount of the operator(with other restrictions). // TARGET_VALIDATORS_COUNT means the target active validators, so the result should include TOTAL_EXITED_KEYS_COUNT newMaxSigningKeysCount = Math256.max( // max validators count can't be less than the deposited validators count // even when the target limit is less than the current active validators count depositedSigningKeysCount, Math256.min( // max validators count can't be greater than the vetted validators count newMaxSigningKeysCount, // SafeMath.add() isn't used below because the sum is always // less or equal to 2 * UINT64_MAX signingKeysStats.get(TOTAL_EXITED_KEYS_COUNT_OFFSET) + operatorTargetStats.get(TARGET_VALIDATORS_COUNT_OFFSET) ) ); } // update MaxSigningKeysCount in the operatorTargetStats of the operator oldMaxSigningKeysCount = operatorTargetStats.get(MAX_VALIDATORS_COUNT_OFFSET); if (oldMaxSigningKeysCount != newMaxSigningKeysCount) { operatorTargetStats.set(MAX_VALIDATORS_COUNT_OFFSET, newMaxSigningKeysCount); _saveOperatorTargetValidatorsStats(_nodeOperatorId, operatorTargetStats); } }
 
The operator's penalty counts as not cleared (zeroing its depositable capacity) if:
  • REFUNDED_VALIDATORS_COUNT is smaller than STUCK_VALIDATORS_COUNT, or the current block timestamp has not yet passed STUCK_PENALTY_END_TIMESTAMP (the operator is actively penalized),
  • or STUCK_PENALTY_END_TIMESTAMP has not been reset to zero.
function isOperatorPenaltyCleared(uint256 _nodeOperatorId) public view returns (bool) { Packed64x4.Packed memory stuckPenaltyStats = _loadOperatorStuckPenaltyStats(_nodeOperatorId); return !_isOperatorPenalized(stuckPenaltyStats) && stuckPenaltyStats.get(STUCK_PENALTY_END_TIMESTAMP_OFFSET) == 0; } function _isOperatorPenalized(Packed64x4.Packed memory stuckPenaltyStats) internal view returns (bool) { return stuckPenaltyStats.get(REFUNDED_VALIDATORS_COUNT_OFFSET) < stuckPenaltyStats.get(STUCK_VALIDATORS_COUNT_OFFSET) || block.timestamp <= stuckPenaltyStats.get(STUCK_PENALTY_END_TIMESTAMP_OFFSET); }

Appendix

Lido.sol initialization

The pooled ether and the shares need to be initialized when deploying Lido.sol, because in getSharesByPooledEth and getPooledEthByShares the denominators are the pooled ether and the total shares respectively. This is implemented in _bootstrapInitialHolder, which assigns the ether balance of Lido.sol to both the pooled ether and the shares, so the initial share rate is 1. Note that the exact balance amount doesn't matter.
//lido/lido-dao/contracts/0.4.24/Lido.sol function initialize(address _lidoLocator, address _eip712StETH) public payable onlyInit { _bootstrapInitialHolder(); _initialize_v2(_lidoLocator, _eip712StETH); initialized(); } function _bootstrapInitialHolder() internal { uint256 balance = address(this).balance; assert(balance != 0); if (_getTotalShares() == 0) { // if protocol is empty bootstrap it with the contract's balance // address(0xdead) is a holder for initial shares _setBufferedEther(balance); // emitting `Submitted` before Transfer events to preserver events order in tx emit Submitted(INITIAL_TOKEN_HOLDER, balance, 0); _mintInitialShares(balance); } }

Beacon Chain Deposit

A user can deposit 32 ETH by calling the deposit function, whose parameters are:
  • pubkey: This is a BLS12-381 public key. It is used to identify the validator in the Beacon Chain. Validators perform duties such as proposing and attesting to blocks to secure the network, and this public key is essential for the protocol to attribute those actions to the correct validator.
  • withdrawal_credentials: The credentials commit to a public key for withdrawals, ensuring that only the entity in possession of the corresponding private key can access the funds.
  • signature: A BLS12-381 signature, which is used to verify that the depositor owns the private key corresponding to the provided public key (pubkey).
  • deposit_data_root: The hash tree root of the deposit data, used to verify that the deposit data has not been tampered with. For example, if the withdrawal_credentials were modified in flight, the staked ether could be lost forever.
// DepositContract Ethereum mainnet 0x00000000219ab540356cBB839Cbe05303d7705Fa function deposit( bytes calldata pubkey, bytes calldata withdrawal_credentials, bytes calldata signature, bytes32 deposit_data_root ) override external payable { // Extended ABI length checks since dynamic types are used. require(pubkey.length == 48, "DepositContract: invalid pubkey length"); require(withdrawal_credentials.length == 32, "DepositContract: invalid withdrawal_credentials length"); require(signature.length == 96, "DepositContract: invalid signature length"); // Check deposit amount require(msg.value >= 1 ether, "DepositContract: deposit value too low"); require(msg.value % 1 gwei == 0, "DepositContract: deposit value not multiple of gwei"); uint deposit_amount = msg.value / 1 gwei; require(deposit_amount <= type(uint64).max, "DepositContract: deposit value too high"); // Emit `DepositEvent` log bytes memory amount = to_little_endian_64(uint64(deposit_amount)); emit DepositEvent( pubkey, withdrawal_credentials, amount, signature, to_little_endian_64(uint64(deposit_count)) ); // Compute deposit data root (`DepositData` hash tree root) bytes32 pubkey_root = sha256(abi.encodePacked(pubkey, bytes16(0))); bytes32 signature_root = sha256(abi.encodePacked( sha256(abi.encodePacked(signature[:64])), sha256(abi.encodePacked(signature[64:], bytes32(0))) )); bytes32 node = sha256(abi.encodePacked( sha256(abi.encodePacked(pubkey_root, withdrawal_credentials)), sha256(abi.encodePacked(amount, bytes24(0), signature_root)) )); // Verify computed and expected deposit data roots match require(node == deposit_data_root, "DepositContract: reconstructed DepositData does not match supplied deposit_data_root"); // Avoid overflowing the Merkle tree (and prevent edge case in computing `branch`) require(deposit_count < MAX_DEPOSIT_COUNT, "DepositContract: merkle tree full"); // Add deposit data root to Merkle tree (update a single `branch` node) deposit_count += 1; uint size = deposit_count; for (uint height = 0; height < DEPOSIT_CONTRACT_TREE_DEPTH; height++) { if ((size & 1) == 1) { branch[height] = node; return; } node = sha256(abi.encodePacked(branch[height], node)); size /= 2; } // As the loop should always end prematurely with the `return` statement, // this code should be unreachable. We assert `false` just to be safe. assert(false); }
 

Calculation of depositable count for staking module

Although the off-chain decision maker has chosen a Staking Module to deposit ether to, the final deposit count for this Staking Module is decided by the on-chain algorithm named the Min First Allocation Strategy. The aim of this allocation strategy is to fill the least populated buckets first, equalizing the fill factor.
// lido/lido-dao/contracts/0.8.9/StakingRouter.sol

/// @dev calculate the max count of deposits which the staking module can provide data for based
///      on the passed `_maxDepositsValue` amount
/// @param _stakingModuleId id of the staking module to be deposited
/// @param _maxDepositsValue max amount of ether that might be used for deposits count calculation
/// @return max number of deposits might be done using the given staking module
function getStakingModuleMaxDepositsCount(uint256 _stakingModuleId, uint256 _maxDepositsValue)
    public
    view
    returns (uint256)
{
    // calculate the new deposit allocation based on:
    // 1. the status of all staking modules
    // 2. the deposit count
    // 3. the Min First Allocation Strategy
    (
        /* uint256 allocated */,
        uint256[] memory newDepositsAllocation,
        StakingModuleCache[] memory stakingModulesCache
    ) = _getDepositsAllocation(_maxDepositsValue / DEPOSIT_SIZE);
    uint256 stakingModuleIndex = _getStakingModuleIndexById(_stakingModuleId);
    // calculate and return the depositable count of the staking module
    return newDepositsAllocation[stakingModuleIndex]
        - stakingModulesCache[stakingModuleIndex].activeValidatorsCount;
}

function _getDepositsAllocation(
    uint256 _depositsToAllocate
) internal view returns (
    uint256 allocated,
    uint256[] memory allocations,
    StakingModuleCache[] memory stakingModulesCache
) {
    // cumulate the total active validators, including the new ones, to calculate the
    // target validator count of each staking module, which bounds each module's capacity
    uint256 totalActiveValidators;

    // get the current total active validator count and the status of each staking module
    (totalActiveValidators, stakingModulesCache) = _loadStakingModulesCache();

    uint256 stakingModulesCount = stakingModulesCache.length;
    // initialize allocations to store the current allocation state
    allocations = new uint256[](stakingModulesCount);
    if (stakingModulesCount > 0) {
        /// @dev new estimated active validators count
        totalActiveValidators += _depositsToAllocate;
        // initialize capacities to store the capacity of each staking module,
        // taking the new depositable count into account
        uint256[] memory capacities = new uint256[](stakingModulesCount);
        uint256 targetValidators;

        for (uint256 i; i < stakingModulesCount; ) {
            allocations[i] = stakingModulesCache[i].activeValidatorsCount;
            targetValidators = (stakingModulesCache[i].targetShare * totalActiveValidators) / TOTAL_BASIS_POINTS;
            // calculate the capacity of each staking module under the restriction of target shares
            capacities[i] = Math256.min(
                targetValidators,
                stakingModulesCache[i].activeValidatorsCount + stakingModulesCache[i].availableValidatorsCount
            );
            unchecked { ++i; }
        }

        // use MinFirstAllocationStrategy to compute the post-allocation state of each staking module
        allocated = MinFirstAllocationStrategy.allocate(allocations, capacities, _depositsToAllocate);
    }
}

/// @dev load modules into a memory cache
///
/// @return totalActiveValidators total active validators across all modules
/// @return stakingModulesCache array of StakingModuleCache structs
function _loadStakingModulesCache() internal view returns (
    uint256 totalActiveValidators,
    StakingModuleCache[] memory stakingModulesCache
) {
    uint256 stakingModulesCount = getStakingModulesCount();
    stakingModulesCache = new StakingModuleCache[](stakingModulesCount);
    for (uint256 i; i < stakingModulesCount; ) {
        stakingModulesCache[i] = _loadStakingModulesCacheItem(i);
        // cumulate each staking module's active validator count
        totalActiveValidators += stakingModulesCache[i].activeValidatorsCount;
        unchecked { ++i; }
    }
}

struct StakingModule {
    /// @notice unique id of the staking module
    uint24 id;
    /// @notice address of staking module
    address stakingModuleAddress;
    /// @notice part of the fee taken from staking rewards that goes to the staking module
    uint16 stakingModuleFee;
    /// @notice part of the fee taken from staking rewards that goes to the treasury
    uint16 treasuryFee;
    /// @notice target percent of total validators in protocol, in BP
    uint16 targetShare;
    /// @notice staking module status if staking module can not accept the deposits or can participate in further reward distribution
    uint8 status;
    /// @notice name of staking module
    string name;
    /// @notice block.timestamp of the last deposit of the staking module
    /// @dev NB: lastDepositAt gets updated even if the deposit value was 0 and no actual deposit happened
    uint64 lastDepositAt;
    /// @notice block.number of the last deposit of the staking module
    /// @dev NB: lastDepositBlock gets updated even if the deposit value was 0 and no actual deposit happened
    uint256 lastDepositBlock;
    /// @notice number of exited validators
    uint256 exitedValidatorsCount;
}

// load a staking module's status into the memory cache
function _loadStakingModulesCacheItem(uint256 _stakingModuleIndex)
    internal
    view
    returns (StakingModuleCache memory cacheItem)
{
    StakingModule storage stakingModuleData = _getStakingModuleByIndex(_stakingModuleIndex);
    cacheItem.stakingModuleAddress = stakingModuleData.stakingModuleAddress;
    cacheItem.stakingModuleId = stakingModuleData.id;
    cacheItem.stakingModuleFee = stakingModuleData.stakingModuleFee;
    cacheItem.treasuryFee = stakingModuleData.treasuryFee;
    cacheItem.targetShare = stakingModuleData.targetShare;
    cacheItem.status = StakingModuleStatus(stakingModuleData.status);

    (
        uint256 totalExitedValidators,
        uint256 totalDepositedValidators,
        uint256 depositableValidatorsCount
    ) = IStakingModule(cacheItem.stakingModuleAddress).getStakingModuleSummary();

    cacheItem.availableValidatorsCount = cacheItem.status == StakingModuleStatus.Active
        ? depositableValidatorsCount
        : 0;
    // the module might not have received all exited validators data yet, so we replace
    // exitedValidatorsCount with the value the staking router is aware of,
    // because the staking router's and the staking module's data are reported asynchronously
    cacheItem.activeValidatorsCount =
        totalDepositedValidators -
        Math256.max(totalExitedValidators, stakingModuleData.exitedValidatorsCount);
}

function _getStakingModuleByIndex(uint256 _stakingModuleIndex) internal view returns (StakingModule storage) {
    mapping(uint256 => StakingModule) storage _stakingModules = _getStorageStakingModulesMapping();
    return _stakingModules[_stakingModuleIndex];
}
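To make the capacity math above concrete, here is a minimal, self-contained sketch (not Lido code; the library name and the example numbers are hypothetical) of the per-module bound capacities[i] = min(target-share ceiling, active + available):

// A minimal, self-contained sketch (NOT Lido code; names and numbers are
// hypothetical) of the per-module capacity bound used by _getDepositsAllocation.
pragma solidity ^0.8.9;

library CapacitySketch {
    uint256 internal constant TOTAL_BASIS_POINTS = 10000;

    // capacity = min(target-share ceiling, active + depositable)
    function capacityOf(
        uint256 targetShareBP,    // module's target share, in basis points
        uint256 totalActiveAfter, // total active validators incl. deposits being allocated
        uint256 active,           // module's currently active validators
        uint256 available         // module's depositable (available) validators
    ) internal pure returns (uint256) {
        uint256 targetValidators = (targetShareBP * totalActiveAfter) / TOTAL_BASIS_POINTS;
        uint256 hardCap = active + available;
        return targetValidators < hardCap ? targetValidators : hardCap;
    }
}

// Example: targetShareBP = 100 (1%), totalActiveAfter = 10000, active = 80,
// available = 50 => capacity = min(100, 80 + 50) = 100, i.e. at most 20 new deposits.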
 

Min First Allocation Strategy

The picture below shows a typical allocation process. Basically, the algorithm picks the least filled bucket as the best candidate to accept a deposit. The final allocation is also capped both by the bucket's capacity and by the smallest allocation strictly larger than that of the best candidate.
notion image
// lido/lido-dao/contracts/common/lib/MinFirstAllocationStrategy.sol

/// @notice Allocates passed maxAllocationSize among the buckets. The resulting allocation doesn't exceed the
///     capacities of the buckets. An algorithm starts filling from the least populated buckets to equalize the fill factor.
///     For example, for buckets: [9998, 70, 0], capacities: [10000, 101, 100], and maxAllocationSize: 101, the allocation happens
///     the following way:
///         1. top up the bucket with index 2 on 70. Intermediate state of the buckets: [9998, 70, 70]. According to the definition,
///            the rest of the allocation must be proportionally split among the buckets with the same values.
///         2. top up the bucket with index 1 on 15. Intermediate state of the buckets: [9998, 85, 70].
///         3. top up the bucket with index 2 on 15. Intermediate state of the buckets: [9998, 85, 85].
///         4. top up the bucket with index 1 on 1. Nothing to distribute. The final state of the buckets: [9998, 86, 85]
/// @dev Method modifies the passed buckets array to reduce the gas costs on memory allocation.
/// @param buckets The array of current allocations in the buckets
/// @param capacities The array of capacities of the buckets
/// @param allocationSize The desired value to allocate among the buckets
/// @return allocated The total value allocated among the buckets. Can't exceed the allocationSize value
function allocate(
    uint256[] memory buckets,
    uint256[] memory capacities,
    uint256 allocationSize
) internal pure returns (uint256 allocated) {
    uint256 allocatedToBestCandidate = 0;
    // iterate the algorithm, allocating depositable count to the best candidate
    // until there is no remaining depositable count
    while (allocated < allocationSize) {
        allocatedToBestCandidate = allocateToBestCandidate(buckets, capacities, allocationSize - allocated);
        // if allocatedToBestCandidate is zero, no further allocation is possible.
        // The buckets memory variable has been fully updated, so we can end the loop.
        if (allocatedToBestCandidate == 0) {
            break;
        }
        allocated += allocatedToBestCandidate;
    }
}

/// @notice Allocates the max allowed value not exceeding allocationSize to the bucket with the least value.
///     The candidate search happens according to the following algorithm:
///         1. Find the first least filled bucket which has free space. Count the number of such buckets.
///         2. If no buckets are found terminate the search - no free buckets
///         3. Find the first bucket with free space, which has the least value greater
///            than the bucket found in step 1. To preserve proportional allocation the resulting allocation can't exceed this value.
///         4. Calculate the allocation size as:
///             min(
///                 (count of least filling buckets > 1 ? ceilDiv(allocationSize, count of least filling buckets) : allocationSize),
///                 fill factor of the bucket found in step 3,
///                 free space of the least filled bucket
///             )
/// @dev Method modifies the passed buckets array to reduce the gas costs on memory allocation.
/// @param buckets The array of current allocations in the buckets
/// @param capacities The array of capacities of the buckets
/// @param allocationSize The desired value to allocate to the bucket
/// @return allocated The total value allocated to the bucket. Can't exceed the allocationSize value
function allocateToBestCandidate(
    uint256[] memory buckets,
    uint256[] memory capacities,
    uint256 allocationSize
) internal pure returns (uint256 allocated) {
    // used to store the index of the least filled staking module
    uint256 bestCandidateIndex = buckets.length;
    // used to store the current allocation of the least filled staking module
    uint256 bestCandidateAllocation = MAX_UINT256;
    // there may be multiple least filled modules which are all the best candidates;
    // the depositable count will then be divided equally between them
    uint256 bestCandidatesCount = 0;

    if (allocationSize == 0) {
        return 0;
    }

    for (uint256 i = 0; i < buckets.length; ++i) {
        // a staking module whose capacity has run out can't accept deposits
        if (buckets[i] >= capacities[i]) {
            continue;
        // compare to find the least allocated staking module
        } else if (bestCandidateAllocation > buckets[i]) {
            // update information about the best candidate
            bestCandidateIndex = i;
            bestCandidatesCount = 1;
            bestCandidateAllocation = buckets[i];
        } else if (bestCandidateAllocation == buckets[i]) {
            // if a staking module has the same allocation as the current best candidate,
            // increase the best candidates count
            bestCandidatesCount += 1;
        }
    }

    // if bestCandidatesCount is zero, no staking module can accept deposits;
    // return without modifying the allocation
    if (bestCandidatesCount == 0) return 0;

    // cap the allocation by the smallest larger allocation than the found best one
    uint256 allocationSizeUpperBound = MAX_UINT256;
    for (uint256 j = 0; j < buckets.length; ++j) {
        if (buckets[j] >= capacities[j]) {
            continue;
        } else if (buckets[j] > bestCandidateAllocation && buckets[j] < allocationSizeUpperBound) {
            allocationSizeUpperBound = buckets[j];
        }
    }

    // calculate the allocated deposit count
    allocated = Math256.min(
        bestCandidatesCount > 1 ? Math256.ceilDiv(allocationSize, bestCandidatesCount) : allocationSize,
        Math256.min(allocationSizeUpperBound, capacities[bestCandidateIndex]) - bestCandidateAllocation
    );
    buckets[bestCandidateIndex] += allocated;
}
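To see the loop end-to-end, here is a compact, self-contained re-implementation sketch (simplified for illustration only; not the actual library) that reproduces the docstring example:

// A compact re-implementation sketch of the allocation loop (simplified for
// illustration; NOT the actual library).
pragma solidity ^0.8.9;

function allocateSketch(
    uint256[] memory buckets,
    uint256[] memory capacities,
    uint256 allocationSize
) pure returns (uint256 allocated) {
    while (allocated < allocationSize) {
        // 1. find the least filled bucket that still has free space, counting ties
        uint256 best = type(uint256).max;
        uint256 bestIdx = buckets.length;
        uint256 ties = 0;
        for (uint256 i = 0; i < buckets.length; ++i) {
            if (buckets[i] >= capacities[i]) continue; // bucket is full
            if (buckets[i] < best) { best = buckets[i]; bestIdx = i; ties = 1; }
            else if (buckets[i] == best) ties += 1;
        }
        if (ties == 0) break; // no bucket has free space left

        // 2. cap by the smallest fill level strictly above the best candidate
        uint256 upperBound = type(uint256).max;
        for (uint256 j = 0; j < buckets.length; ++j) {
            if (buckets[j] < capacities[j] && buckets[j] > best && buckets[j] < upperBound) {
                upperBound = buckets[j];
            }
        }

        // 3. allocate min(fair share among ties, room up to the cap and capacity)
        uint256 remaining = allocationSize - allocated;
        uint256 share = ties > 1 ? (remaining + ties - 1) / ties : remaining; // ceilDiv
        uint256 cap = upperBound < capacities[bestIdx] ? upperBound : capacities[bestIdx];
        uint256 step = share < cap - best ? share : cap - best;
        if (step == 0) break;
        buckets[bestIdx] += step;
        allocated += step;
    }
}

// Running this on the docstring example -- buckets [9998, 70, 0],
// capacities [10000, 101, 100], allocationSize 101 -- ends in the same final
// state as the library: buckets [9998, 86, 85], allocated = 101.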

Stuck penalty

Operators may fail to fulfill their duties; for example, a validator may get stuck for some reason and fail to fulfill an exit request issued by Lido. Lido has a penalty mechanism to punish such misbehavior. Basically, the oracle updates the stuck and refunded validator counts of the staking modules and checks whether operators should be penalized. A penalized operator loses staking rewards and is excluded from new deposits.
Inside the StakingRouter, the methods reportStakingModuleStuckValidatorsCountByNodeOperator and updateRefundedValidatorsCount are called by the reporting and management roles to update the stuck and refunded status of operators.
// lido/lido-dao/contracts/0.8.9/StakingRouter.sol

/// @notice Updates stuck validators counts per node operator for the staking module with
///     the specified id.
///
/// See the docs for `updateExitedValidatorsCountByStakingModule` for the description of the
/// overall update process.
///
/// @param _stakingModuleId The id of the staking modules to be updated.
/// @param _nodeOperatorIds Ids of the node operators to be updated.
/// @param _stuckValidatorsCounts New counts of stuck validators for the specified node operators.
function reportStakingModuleStuckValidatorsCountByNodeOperator(
    uint256 _stakingModuleId,
    bytes calldata _nodeOperatorIds,
    bytes calldata _stuckValidatorsCounts
) external onlyRole(REPORT_EXITED_VALIDATORS_ROLE) {
    address moduleAddr = _getStakingModuleById(_stakingModuleId).stakingModuleAddress;
    _checkValidatorsByNodeOperatorReportData(_nodeOperatorIds, _stuckValidatorsCounts);
    IStakingModule(moduleAddr).updateStuckValidatorsCount(_nodeOperatorIds, _stuckValidatorsCounts);
}

/// @notice Updates the number of the refunded validators in the staking module with the given
///     node operator id
/// @param _stakingModuleId Id of the staking module
/// @param _nodeOperatorId Id of the node operator
/// @param _refundedValidatorsCount New number of refunded validators of the node operator
function updateRefundedValidatorsCount(
    uint256 _stakingModuleId,
    uint256 _nodeOperatorId,
    uint256 _refundedValidatorsCount
) external onlyRole(STAKING_MODULE_MANAGE_ROLE) {
    address moduleAddr = _getStakingModuleById(_stakingModuleId).stakingModuleAddress;
    IStakingModule(moduleAddr).updateRefundedValidatorsCount(_nodeOperatorId, _refundedValidatorsCount);
}
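The two bytes arguments are tightly packed arrays: judging by the assembly in the staking module code below, each node operator id occupies 8 bytes (uint64) and each count 16 bytes (uint128). Here is a hedged sketch of a hypothetical packing helper (not part of Lido) that a reporter could use:

// A hedged sketch of a hypothetical packing helper (NOT part of Lido): node
// operator ids are packed as 8-byte (uint64) values and stuck counts as
// 16-byte (uint128) values, matching the calldata layout parsed below.
pragma solidity ^0.8.9;

function packStuckValidatorsReport(
    uint64[] memory ids,
    uint128[] memory counts
) pure returns (bytes memory nodeOperatorIds, bytes memory stuckValidatorsCounts) {
    require(ids.length == counts.length, "LENGTH_MISMATCH");
    for (uint256 i = 0; i < ids.length; ++i) {
        // abi.encodePacked appends fixed-width big-endian values with no padding
        nodeOperatorIds = abi.encodePacked(nodeOperatorIds, ids[i]);                // +8 bytes
        stuckValidatorsCounts = abi.encodePacked(stuckValidatorsCounts, counts[i]); // +16 bytes
    }
}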
 
Inside the staking module, these calls update the stuck and refunded status of each operator, then check and update its stuckPenaltyStats.
// lido/lido-dao/contracts/0.4.24/nos/NodeOperatorsRegistry.sol

/// @notice Called by StakingRouter to update the number of the validators of the given node
///     operator that were requested to exit but failed to do so in the max allowed time
///
/// @param _nodeOperatorIds bytes packed array of the node operators id
/// @param _stuckValidatorsCounts bytes packed array of the new number of stuck validators for the node operators
function updateStuckValidatorsCount(bytes _nodeOperatorIds, bytes _stuckValidatorsCounts) external {
    _auth(STAKING_ROUTER_ROLE);
    uint256 nodeOperatorsCount = _checkReportPayload(_nodeOperatorIds.length, _stuckValidatorsCounts.length);
    uint256 totalNodeOperatorsCount = getNodeOperatorsCount();

    uint256 nodeOperatorId;
    uint256 validatorsCount;
    uint256 _nodeOperatorIdsOffset;
    uint256 _stuckValidatorsCountsOffset;

    /// @dev calldata layout:
    /// | func sig (4 bytes) | ABI-enc data |
    ///
    /// ABI-enc data:
    ///
    /// |    32 bytes    |     32 bytes      | 32 bytes | ... | 32 bytes   | ...... |
    /// | ids len offset | counts len offset | ids len  | ids | counts len | counts |
    assembly {
        _nodeOperatorIdsOffset := add(calldataload(4), 36) // arg1 calldata offset + 4 (signature len) + 32 (length slot)
        _stuckValidatorsCountsOffset := add(calldataload(36), 36) // arg2 calldata offset + 4 (signature len) + 32 (length slot)
    }
    for (uint256 i; i < nodeOperatorsCount;) {
        /// @solidity memory-safe-assembly
        assembly {
            nodeOperatorId := shr(192, calldataload(add(_nodeOperatorIdsOffset, mul(i, 8))))
            validatorsCount := shr(128, calldataload(add(_stuckValidatorsCountsOffset, mul(i, 16))))
            i := add(i, 1)
        }
        _requireValidRange(nodeOperatorId < totalNodeOperatorsCount);
        _updateStuckValidatorsCount(nodeOperatorId, validatorsCount);
    }
    _increaseValidatorsKeysNonce();
}

/**
 * @notice Set the stuck signing keys count
 */
function _updateStuckValidatorsCount(uint256 _nodeOperatorId, uint256 _stuckValidatorsCount) internal {
    Packed64x4.Packed memory stuckPenaltyStats = _loadOperatorStuckPenaltyStats(_nodeOperatorId);
    uint256 curStuckValidatorsCount = stuckPenaltyStats.get(STUCK_VALIDATORS_COUNT_OFFSET);
    if (_stuckValidatorsCount == curStuckValidatorsCount) return;

    Packed64x4.Packed memory signingKeysStats = _loadOperatorSigningKeysStats(_nodeOperatorId);
    uint256 exitedValidatorsCount = signingKeysStats.get(TOTAL_EXITED_KEYS_COUNT_OFFSET);
    uint256 depositedValidatorsCount = signingKeysStats.get(TOTAL_DEPOSITED_KEYS_COUNT_OFFSET);

    // sustain the invariant: exited + stuck <= deposited
    assert(depositedValidatorsCount >= exitedValidatorsCount);
    _requireValidRange(_stuckValidatorsCount <= depositedValidatorsCount - exitedValidatorsCount);

    uint256 curRefundedValidatorsCount = stuckPenaltyStats.get(REFUNDED_VALIDATORS_COUNT_OFFSET);
    if (_stuckValidatorsCount <= curRefundedValidatorsCount && curStuckValidatorsCount > curRefundedValidatorsCount) {
        stuckPenaltyStats.set(STUCK_PENALTY_END_TIMESTAMP_OFFSET, block.timestamp + getStuckPenaltyDelay());
    }
    stuckPenaltyStats.set(STUCK_VALIDATORS_COUNT_OFFSET, _stuckValidatorsCount);
    _saveOperatorStuckPenaltyStats(_nodeOperatorId, stuckPenaltyStats);
    emit StuckPenaltyStateChanged(
        _nodeOperatorId,
        _stuckValidatorsCount,
        curRefundedValidatorsCount,
        stuckPenaltyStats.get(STUCK_PENALTY_END_TIMESTAMP_OFFSET)
    );
    _updateSummaryMaxValidatorsCount(_nodeOperatorId);
}

/// @notice Updates the number of the refunded validators for node operator with the given id
/// @param _nodeOperatorId Id of the node operator
/// @param _refundedValidatorsCount New number of refunded validators of the node operator
function updateRefundedValidatorsCount(uint256 _nodeOperatorId, uint256 _refundedValidatorsCount) external {
    _onlyExistedNodeOperator(_nodeOperatorId);
    _auth(STAKING_ROUTER_ROLE);
    _updateRefundValidatorsKeysCount(_nodeOperatorId, _refundedValidatorsCount);
}

function _updateRefundValidatorsKeysCount(uint256 _nodeOperatorId, uint256 _refundedValidatorsCount) internal {
    Packed64x4.Packed memory stuckPenaltyStats = _loadOperatorStuckPenaltyStats(_nodeOperatorId);
    uint256 curRefundedValidatorsCount = stuckPenaltyStats.get(REFUNDED_VALIDATORS_COUNT_OFFSET);
    if (_refundedValidatorsCount == curRefundedValidatorsCount) return;

    Packed64x4.Packed memory signingKeysStats = _loadOperatorSigningKeysStats(_nodeOperatorId);
    _requireValidRange(_refundedValidatorsCount <= signingKeysStats.get(TOTAL_DEPOSITED_KEYS_COUNT_OFFSET));

    uint256 curStuckValidatorsCount = stuckPenaltyStats.get(STUCK_VALIDATORS_COUNT_OFFSET);
    if (_refundedValidatorsCount >= curStuckValidatorsCount && curRefundedValidatorsCount < curStuckValidatorsCount) {
        stuckPenaltyStats.set(STUCK_PENALTY_END_TIMESTAMP_OFFSET, block.timestamp + getStuckPenaltyDelay());
    }
    stuckPenaltyStats.set(REFUNDED_VALIDATORS_COUNT_OFFSET, _refundedValidatorsCount);
    _saveOperatorStuckPenaltyStats(_nodeOperatorId, stuckPenaltyStats);
    emit StuckPenaltyStateChanged(
        _nodeOperatorId,
        curStuckValidatorsCount,
        _refundedValidatorsCount,
        stuckPenaltyStats.get(STUCK_PENALTY_END_TIMESTAMP_OFFSET)
    );
    _updateSummaryMaxValidatorsCount(_nodeOperatorId);
}
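Putting the two update paths together, the penalty condition implied by the logic above can be sketched as follows: an operator is penalized while stuck > refunded, and remains penalized until the penalty end timestamp (set when the refund catches up) has passed. This sketch is illustrative only; the authoritative check lives in NodeOperatorsRegistry:

// A hedged sketch (NOT Lido code) of the penalty condition implied by the
// update logic above.
pragma solidity ^0.8.9;

function isPenalizedSketch(
    uint256 stuckCount,
    uint256 refundedCount,
    uint256 penaltyEndTimestamp, // block.timestamp + stuckPenaltyDelay at refund time
    uint256 nowTimestamp
) pure returns (bool) {
    return refundedCount < stuckCount || nowTimestamp <= penaltyEndTimestamp;
}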

Turbo vs Bunker mode

The difference between the two modes is essentially the condition for finalizing withdrawal requests. Turbo mode is the usual state, in which requests are finalized as fast as possible, while bunker mode assumes a more sophisticated finalization flow and is activated when it's necessary to socialize penalties and losses.
For example, there may be rare situations where dramatic slashing events are about to occur and will significantly affect the share rate. If Lido finalized withdrawal requests quickly (in turbo mode) during such a period, stakers who exit early and those who remain in the staking pool would end up with significantly different share rates. Bunker mode socializes the penalties, aiming for a situation where users who remain in the staking pool, users who exit within the current frame, and users who exit within the nearest frames are all in nearly the same conditions. This spreads the loss across all stakers (i.e., socializes the penalties and losses).
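A toy example with hypothetical numbers makes the unfairness concrete; the share rate here is simply pooled ether divided by total shares:

// A toy numeric sketch (hypothetical numbers, NOT Lido code). Share rate is
// pooled ether per share, in 1e27 fixed point.
pragma solidity ^0.8.9;

function shareRate(uint256 pooledEther, uint256 totalShares) pure returns (uint256) {
    return (pooledEther * 1e27) / totalShares;
}

// Pool: 1000 ether backing 1000 shares => rate 1.0.
// An anticipated slashing will burn 100 ether => post-slash rate 0.9.
// Turbo mode: a holder of 100 shares exits now at rate 1.0 and takes 100 ether;
//   after the slashing, the remaining 900 shares back 800 ether => rate ~0.889,
//   so the stayers absorb the whole loss.
// Bunker mode: finalization waits until the slashing is reflected; the exiting
//   holder receives 90 ether and everyone ends up at rate 0.9.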

Contracts

| Network | Name | Functionality | Address |
| --- | --- | --- | --- |
| Ethereum | stETH (Lido.sol) | handles the stake operation | |
| Ethereum | wstETH | wrapped stETH | |
| Ethereum | unstETH | withdrawal queue that registers NFTs representing withdrawal requests | |
| Ethereum | LidoLocator | contract address registry | |
| Ethereum | EIP712StETH | handles EIP-712 signatures | |
| Ethereum | WithdrawalQueueERC721 | handles withdrawal requests | |
| Ethereum | AccountingOracle | accepts oracle reports to finalize withdrawal requests and distribute rewards | |
| Ethereum | HashConsensus (of AccountingOracle) | submits consensus reports | |
| Ethereum | LDO | governance token | |
| Ethereum | Beacon Contract | official Ethereum staking deposit contract | |
| Ethereum | HashConsensus (of ValidatorsExitBusOracle) | submits consensus reports | |
| Ethereum | ValidatorsExitBusOracle | handles validator exit requests | |
| Ethereum | OracleReportSanityChecker | checks the validity of oracle reports | |
| Ethereum | StakingRouter | registers staking modules | |
| Ethereum | CuratedNodeOperatorsRegistry | staking module controlled by the Lido DAO | |
| Ethereum | Burner | burns stETH shares | |
| Ethereum | WithdrawalVault | receives exited validators' staked ether and staking rewards | |
| Ethereum | DepositSecurityModule | deposit entry point | |

Note

  • ValidatorsExitBusOracle and AccountingOracle each have their own HashConsensus contract

Reference