name | severity | description | recommendation | impact | function |
---|---|---|---|---|---|
gas limit DoS via unbounded operations | medium | A single attack leads to two types of vulnerabilities, in `UserManager.sol` and `UToken.sol`.\\nOn `UserManager.sol` ==> `updateTrust()` Case one: malicious users (members) can keep `vouching` for Alice with `trustAmount == 0` until her `vouchers` array reaches the max limit (2**256-1). So when a normal member tries to give `vouching` to Alice with `trustAmount != 0`, the call fails because the `vouchers` array is completely full.\\nCase two (which is more realistic): malicious users (members) can keep `vouching` for Alice with `trustAmount == 0` until her `vouchers` array reaches, let's say, 20% of the max limit (2**256-1). The problem arises when Alice invokes `borrow()` or `repayBorrow()` on `UToken.sol`\\n```\\n IUserManager(userManager).updateLocked(msg.sender, uint96(amount + fee), true);\\n …\\n IUserManager(userManager).updateLocked(borrower, uint96(repayAmount - interest), false);\\n```\\n\\nIt will call `updateLocked()` on `UserManager.sol`\\n```\\n function updateLocked(\\n address borrower,\\n uint96 amount,\\n bool lock\\n ) external onlyMarket {\\n uint96 remaining = amount;\\n\\n for (uint256 i = 0; i < vouchers[borrower].length; i++) {\\n \\n```\\n\\nThe for loop iterates over `vouchers[]`, which could be long enough to lead to a "gas limit DoS via unbounded operations". The same applies to `registerMember()`: any user could lose all their funds in this transaction\\n```\\n function registerMember(address newMember) public virtual whenNotPaused {\\n if (stakers[newMember].isMember) revert NoExistingMember();\\n\\n uint256 count = 0;\\n uint256 vouchersLength = vouchers[newMember].length;\\n\\n // Loop through all the vouchers to count how many active vouches there\\n // are that are greater than 0. Vouch is the min of stake and trust\\n for (uint256 i = 0; i < vouchersLength; i++) {\\n```\\n | Add a check for `trustAmount == 0` | 1- The user can't receive any more `vouching` 2- The user will not be able to `borrow()` or `repayBorrow()` 3- No one can invoke `registerMember()` successfully for a specific user | ```\\n IUserManager(userManager).updateLocked(msg.sender, uint96(amount + fee), true);\\n …\\n IUserManager(userManager).updateLocked(borrower, uint96(repayAmount - interest), false);\\n```\\n |
Template implementations don't validate configurations properly | medium | In past audits, we have seen contract admins claim that unvalidated configuration setters are fine since "admins are trustworthy". However, cases such as Nomad being drained for over $150M and the misconfiguration in the Acala stablecoin project that allowed an attacker to steal 1.2 billion aUSD have shown again and again that even trustable entities can make mistakes. Thus any fields that might potentially result in insolvency of the protocol should be thoroughly checked.\\nNftPort template implementations often omit checks for config fields. For the rest of this issue, we take `royalty` related fields as an example to illustrate the potential consequences of misconfigurations. Notably, the lack of checks is not limited to `royalty` but exists across most config fields.\\nAdmins are allowed to set a wrong `royaltiesBps` which is higher than `ROYALTIES_BASIS`. `royaltyInfo()` will accept this invalid `royaltiesBps` and users will pay a large amount of royalty.\\nEIP-2981 (NFT Royalty Standard) defines the `royaltyInfo()` function that specifies how much to pay for a given sale price. In general, the royalty should not be higher than 100%. NFTCollection.sol checks that admins can't set royalties to more than 100%:\\n```\\n /// Validate a runtime configuration change\\n function _validateRuntimeConfig(RuntimeConfig calldata config)\\n internal\\n view\\n {\\n // Can't set royalties to more than 100%\\n require(config.royaltiesBps <= ROYALTIES_BASIS, "Royalties too high");\\n\\n // rest of code\\n```\\n\\nBut `NFTCollection` only checks `royaltiesBps` when admins call `updateConfig()`; it doesn't check `royaltiesBps` in the `initialize()` function, so admins could set an invalid `royaltiesBps` (higher than 100%) when initializing contracts.\\nThe same problem exists in ERC721NFTProduct and ERC1155NFTProduct. Both ERC721NFTProduct and ERC1155NFTProduct don't check `royaltiesBasisPoints` in the `initialize()` function. Furthermore, these contracts also don't check `royaltiesBasisPoints` when admins call the `update()` function. It means that admins could set an invalid `royaltiesBasisPoints`, which may be higher than 100%, at any time. | Issue: Template implementations don't validate configurations properly\\nCheck `royaltiesBps <= ROYALTIES_BASIS` in both the `initialize()` and `update()` functions. | EIP-2981 only specifies that `royaltyInfo()` should return the royalty amount rather than a royalty percentage. This means that if the contract has an invalid royalty percentage higher than 100%, `royaltyInfo()` doesn't revert and users will pay a large amount of royalty. | ```\\n /// Validate a runtime configuration change\\n function _validateRuntimeConfig(RuntimeConfig calldata config)\\n internal\\n view\\n {\\n // Can't set royalties to more than 100%\\n require(config.royaltiesBps <= ROYALTIES_BASIS, "Royalties too high");\\n\\n // rest of code\\n```\\n |
Freezing roles in ERC721NFTProduct and ERC1155NFTProduct is moot | medium | In ERC721NFTProduct and ERC1155NFTProduct, roles can be frozen, which is supposed to lock roles to the current addresses and not allow any changes. The problem is that the admin can still use AccessControlUpgradable#grantRole and revokeRole to grant and remove roles for addresses, because hasRole allows "ADMIN_ROLE" to bypass all role restrictions, even "DEFAULT_ADMIN_ROLE".\\n```\\nfunction hasRole(bytes32 role, address account)\\n    public\\n    view\\n    virtual\\n    override\\n    returns (bool)\\n{\\n    return\\n        super.hasRole(ADMIN_ROLE, account) || super.hasRole(role, account);\\n}\\n```\\n\\nIn GranularRoles.sol and AccessControlUpgradable.sol, the developers are careful to never grant the "DEFAULT_ADMIN_ROLE" to any user. Additionally they never set the admin role of any role, so that its admin remains "DEFAULT_ADMIN_ROLE". In theory this should make it so that there is no way to grant or revoke roles outside of GranularRoles#_initRoles and updateRoles. The issue is that the override by GranularRoles#hasRole allows "ADMIN_ROLE" to bypass any role restriction, including "DEFAULT_ADMIN_ROLE". This allows "ADMIN_ROLE" to directly call AccessControlUpgradable#grantRole and revokeRole, which makes the entire freezing system useless as it doesn't actually stop any role modification. | Override AccessControlUpgradable#grantRole and revokeRole in GranularRoles.sol to revert when called:\\n```\\n GranularRoles.sol\\n\\n+ function grantRole(bytes32 role, address account) public virtual override {\\n+ revert();\\n+ }\\n\\n+ function revokeRole(bytes32 role, address account) public virtual override {\\n+ revert();\\n+ }\\n```\\n | Freezing roles doesn't actually prevent "ADMIN_ROLE" from modifying roles as intended. Submitting as high due to gross over-extension of admin authority clearly violating intended guardrails. | ```\\nfunction hasRole(bytes32 role, address account)\\n    public\\n    view\\n    virtual\\n    override\\n    returns (bool)\\n{\\n    return\\n        super.hasRole(ADMIN_ROLE, account) || super.hasRole(role, account);\\n}\\n```\\n |
registerTemplate() can't properly handle an ITemplate version of 0 | medium | In Factory.sol, when a template is registered and the template's version is 0, `latestImplementation[templateName]` stays `address(0)`, and when another version is added later, `_templateNames` will contain a duplicate entry.\\nWhen the version equals 0, `latestImplementation[templateName]` is not set\\n```\\n function _setTemplate(\\n string memory templateName,\\n uint256 templateVersion,\\n address implementationAddress\\n ) internal {\\n// rest of code\\n\\n if (latestImplementation[templateName] == address(0)) { /****add other version, _templateNames will duplicate ****/\\n _templateNames.push(templateName);\\n }\\n\\n if (templateVersion > latestVersion[templateName]) {\\n latestVersion[templateName] = templateVersion;\\n latestImplementation[templateName] = implementationAddress; /****templateVersion==0 , don't set ****/\\n }\\n\\n }\\n```\\n | ```\\n function _setTemplate(\\n string memory templateName,\\n uint256 templateVersion,\\n address implementationAddress\\n ) internal {\\n\\n - if (templateVersion > latestVersion[templateName]) {\\n + if (templateVersion >= latestVersion[templateName]) {\\n latestVersion[templateName] = templateVersion;\\n latestImplementation[templateName] = implementationAddress; \\n }\\n```\\n | `latestImplementation[templateName]` and `_templateNames` will be wrong. External contracts may think the template is not set up, resulting in duplicate setups that keep failing | ```\\n function _setTemplate(\\n string memory templateName,\\n uint256 templateVersion,\\n address implementationAddress\\n ) internal {\\n// rest of code\\n\\n if (latestImplementation[templateName] == address(0)) { /****add other version, _templateNames will duplicate ****/\\n _templateNames.push(templateName);\\n }\\n\\n if (templateVersion > latestVersion[templateName]) {\\n latestVersion[templateName] = templateVersion;\\n latestImplementation[templateName] = implementationAddress; /****templateVersion==0 , don't set ****/\\n }\\n\\n }\\n```\\n |
Factory uses signatures that do not have an expiration | medium | NftPort can't remove a license from a user once the signature has been provided, without changing the `SIGNER_ROLE` address.\\nIn the Factory contract there are a few methods that can only be called when signed by the trusted signer.\\nThis is how the signature is checked\\n```\\nsignedOnly(abi.encodePacked(msg.sender, instance, data), signature)\\n```\\n\\nAs you can see, there is no expiration time. That means that once the signer has signed the signature for the user, the user can use it forever, like a lifetime license. The only option to remove the license from a user is to revoke `SIGNER_ROLE` and set it to another account. But it's possible that NFTPort will need to do that while keeping the current signer. | Add an expiration parameter to the signature. | A license can't be removed. | ```\\nsignedOnly(abi.encodePacked(msg.sender, instance, data), signature)\\n```\\n |
Underflow in ```_previewWithdraw``` could prevent withdrawals | high | An underflow in the `_previewWithdraw` function in `AuctionInternal.sol` due to totalContractsSold exceeding auction.totalContracts could prevent users from withdrawing options.\\nThe `_previewWithdraw` function returns the fill and refund amounts for a buyer by looping over all orders. A totalContractsSold variable is used to track the amount of contracts sold as the loop iterates over all orders. If the current order's size + totalContractsSold exceeds the auction's totalContracts then the order will only be filled partially. The calculation for the partial fill (remainder) is given on line 318. This will lead to an underflow if totalContractsSold > the auction's totalContracts which would happen if there are multiple orders that cause the totalContractsSold variable to exceed totalContracts.\\nThe totalContractsSold variable in `_previewWithdraw` could exceed the auction.totalContracts due to the contracts sold before the start of an auction through limit orders not being limited. When an order is added, `_finalizeAuction` is only called if the auction has started. The `_finalizeAuction` function will call the `_processOrders` function which will return true if the auction has reached 100% utilization. Since limit orders can be made before the start of an auction, `_finalizeAuction` is not called and any amount of new orders may be made.\\nExample: The buyer makes a limit order with size > auction.totalContracts. They then make another order with size of anything. These orders are made before the start of the auction so `_processOrders` is not called for every new order and totalContractsSold can exceed totalContracts. When `_previewWithdraw` is called, after the buyer's first order is processed, totalContractsSold > auction.totalContracts so the condition on line 313 passes. Since totalContractsSold > auction.totalContracts the calculation on line 318 underflows and the transaction reverts. The `_previewWithdraw` function and thus the `_withdraw` function is uncallable.\\nTest code added to `Auction.behaviour.ts`, under the `#addLimitOrder(uint64,int128,uint256)` section:\\n```\\n it("previewWithdraw reverts if buyer has too many contracts", async () => {\\n assert.isEmpty(await auction.getEpochsByBuyer(addresses.buyer1));\\n\\n await asset\\n .connect(signers.buyer1)\\n .approve(addresses.auction, ethers.constants.MaxUint256);\\n\\n const totalContracts = await auction.getTotalContracts(epoch);\\n await auction.addLimitOrder(\\n epoch,\\n fixedFromFloat(params.price.max),\\n totalContracts.mul(2)\\n );\\n\\n await auction.addLimitOrder(\\n epoch,\\n fixedFromFloat(params.price.max),\\n totalContracts.div(2)\\n );\\n\\n const epochByBuyer = await auction.getEpochsByBuyer(addresses.buyer1);\\n\\n assert.equal(epochByBuyer.length, 1);\\n assert.bnEqual(epochByBuyer[0], epoch);\\n \\n await expect(auction.callStatic[\\n "previewWithdraw(uint64)"\\n ](epoch)).to.be.reverted;\\n });\\n```\\n\\nThe test code above shows a buyer is able to add an order with size auction.totalContracts*2 and a subsequent order with size auction.totalContracts/2. The `previewWithdraw` function reverts when called. | The loop in `_previewWithdraw` should check if the current totalContractsSold is >= totalContracts. If it is then the remainder should be set to 0 which would allow the current order to be fully refunded.\\nAdditionally, the orders for an auction should be checked before the auction starts. 
In `_addOrder`, consider adding a condition that will call `_processOrders` if the auction has not started yet. If `_processOrders` returns true then do not allow the order to be added. Or just allow the auction to be finalized before it starts if the total contracts sold has reached the auction's totalContracts. | Users would be unable to withdraw from the Auction contract. | ```\\n it("previewWithdraw reverts if buyer has too many contracts", async () => {\\n assert.isEmpty(await auction.getEpochsByBuyer(addresses.buyer1));\\n\\n await asset\\n .connect(signers.buyer1)\\n .approve(addresses.auction, ethers.constants.MaxUint256);\\n\\n const totalContracts = await auction.getTotalContracts(epoch);\\n await auction.addLimitOrder(\\n epoch,\\n fixedFromFloat(params.price.max),\\n totalContracts.mul(2)\\n );\\n\\n await auction.addLimitOrder(\\n epoch,\\n fixedFromFloat(params.price.max),\\n totalContracts.div(2)\\n );\\n\\n const epochByBuyer = await auction.getEpochsByBuyer(addresses.buyer1);\\n\\n assert.equal(epochByBuyer.length, 1);\\n assert.bnEqual(epochByBuyer[0], epoch);\\n \\n await expect(auction.callStatic[\\n "previewWithdraw(uint64)"\\n ](epoch)).to.be.reverted;\\n });\\n```\\n |
Users can avoid performance fees by withdrawing before the end of the epoch, forcing other users to pay their fees | medium | No performance fees are taken when a user withdraws early from the vault, but their withdrawal value is still used to compute fees, which are then taken from other users.\\n```\\nuint256 adjustedTotalAssets = _totalAssets() + l.totalWithdrawals;\\n\\nif (adjustedTotalAssets > l.lastTotalAssets) {\\n netIncome = adjustedTotalAssets - l.lastTotalAssets;\\n\\n feeInCollateral = l.performanceFee64x64.mulu(netIncome);\\n\\n ERC20.safeTransfer(l.feeRecipient, feeInCollateral);\\n}\\n```\\n\\nWhen taking the performance fees, the vault factors in both its current assets and the total value of withdrawals that happened during the epoch. Fees are paid from the collateral tokens in the vault, at the end of the epoch. Paying the fees like this reduces the share price for all users, which effectively works as a fee applied to all users. The problem is that withdrawals that take place during the epoch are not subject to this fee, yet the total value of all those withdrawals is added to the adjusted assets of the vault. This means that they don't pay any performance fee, but the fee is still taken from the vault collateral. In effect they completely avoid the fee and force all the other users of the vault to pay it for them. | Fees should be taken on withdrawals that occur before the vault is settled | Users can avoid performance fees and force other users to pay them | ```\\nuint256 adjustedTotalAssets = _totalAssets() + l.totalWithdrawals;\\n\\nif (adjustedTotalAssets > l.lastTotalAssets) {\\n netIncome = adjustedTotalAssets - l.lastTotalAssets;\\n\\n feeInCollateral = l.performanceFee64x64.mulu(netIncome);\\n\\n ERC20.safeTransfer(l.feeRecipient, feeInCollateral);\\n}\\n```\\n |
processAuction() in VaultAdmin.sol can be called multiple times by the keeper if the auction is cancelled. | medium | processAuction() in VaultAdmin.sol can be called multiple times by the keeper if the auction is cancelled.\\nprocessAuction() in VaultAdmin.sol can be called multiple times by the keeper; the code below would execute more than once if the auction is cancelled,\\nbecause the line of code inside processAuction in VaultAdmin.sol that changes the auction status to PROCESSED\\nonly runs when the auction is finalized. If it is not finalized, the auction remains in the cancelled state, and\\n```\\n bool cancelled = l.Auction.isCancelled(lastEpoch);\\n bool finalized = l.Auction.isFinalized(lastEpoch);\\n\\n require(\\n (!finalized && cancelled) || (finalized && !cancelled),\\n "auction is not finalized nor cancelled"\\n );\\n```\\n\\nwould always pass because the auction is in the cancelled state. | Issue: processAuction() in VaultAdmin.sol can be called multiple times by the keeper if the auction is cancelled.\\nWe recommend that the project lock the epoch and make it impossible for the keeper to call processAuction again. | Why should processAuction not be called multiple times?\\nThe first time it is called, the withdrawal lock is released so users can withdraw funds,\\n```\\n // deactivates withdrawal lock\\n l.auctionProcessed = true;\\n```\\n\\nthen if it is called again, lastTotalAssets can be updated multiple times.\\n```\\n // stores the last total asset amount, this is effectively the amount of assets held\\n // in the vault at the start of the auction\\n l.lastTotalAssets = _totalAssets();\\n```\\n\\nThe total assets can keep getting lower because people are withdrawing their funds,\\nand when _collectPerformanceFee is called, the performance fee may still be collected | ```\\n bool cancelled = l.Auction.isCancelled(lastEpoch);\\n bool finalized = l.Auction.isFinalized(lastEpoch);\\n\\n require(\\n (!finalized && cancelled) || (finalized && !cancelled),\\n "auction is not finalized nor cancelled"\\n );\\n```\\n |
`TradingUtils._executeTrade()` doesn't check `preTradeBalance` properly. | high | `TradingUtils._executeTrade()` doesn't check `preTradeBalance` properly.\\n`TradingUtils._executeTrade()` doesn't check `preTradeBalance` properly.\\n```\\nfunction _executeTrade(\\n address target,\\n uint256 msgValue,\\n bytes memory params,\\n address spender,\\n Trade memory trade\\n) private {\\n uint256 preTradeBalance;\\n\\n if (trade.sellToken == address(Deployments.WETH) && spender == Deployments.ETH_ADDRESS) {\\n preTradeBalance = address(this).balance;\\n // Curve doesn't support Deployments.WETH (spender == address(0))\\n uint256 withdrawAmount = _isExactIn(trade) ? trade.amount : trade.limit;\\n Deployments.WETH.withdraw(withdrawAmount);\\n } else if (trade.sellToken == Deployments.ETH_ADDRESS && spender != Deployments.ETH_ADDRESS) {\\n preTradeBalance = IERC20(address(Deployments.WETH)).balanceOf(address(this));\\n // UniswapV3 doesn't support ETH (spender != address(0))\\n uint256 depositAmount = _isExactIn(trade) ? trade.amount : trade.limit;\\n Deployments.WETH.deposit{value: depositAmount }();\\n }\\n\\n (bool success, bytes memory returnData) = target.call{value: msgValue}(params);\\n if (!success) revert TradeExecution(returnData);\\n\\n if (trade.buyToken == address(Deployments.WETH)) {\\n if (address(this).balance > preTradeBalance) {\\n // If the caller specifies that they want to receive Deployments.WETH but we have received ETH,\\n // wrap the ETH to Deployments.WETH.\\n uint256 depositAmount;\\n unchecked { depositAmount = address(this).balance - preTradeBalance; }\\n Deployments.WETH.deposit{value: depositAmount}();\\n }\\n } else if (trade.buyToken == Deployments.ETH_ADDRESS) {\\n uint256 postTradeBalance = IERC20(address(Deployments.WETH)).balanceOf(address(this));\\n if (postTradeBalance > preTradeBalance) {\\n // If the caller specifies that they want to receive ETH but we have received Deployments.WETH,\\n // unwrap the Deployments.WETH to ETH.\\n uint256 withdrawAmount;\\n unchecked { withdrawAmount = postTradeBalance - preTradeBalance; }\\n Deployments.WETH.withdraw(withdrawAmount);\\n }\\n }\\n}\\n```\\n\\nIt uses `preTradeBalance` to manage the WETH/ETH deposits and withdrawals.\\nBut it doesn't save the correct `preTradeBalance` for some cases.\\nLet's assume `trade.sellToken = some ERC20 token(not WETH/ETH), trade.buyToken = WETH`\\nBefore executing the trade, `preTradeBalance` will be 0 as both `if` conditions are false.\\nThen all ETH inside the contract will be converted to WETH and considered as a `amountBought` here and here.\\nAfter all, all ETH of the contract will be lost.\\nAll WETH of the contract will be lost also when `trade.sellToken = some ERC20 token(not WETH/ETH), trade.buyToken = ETH` here. | We should check `preTradeBalance` properly. We can remove the current code for `preTradeBalance` and insert the below code before executing the trade.\\n```\\nif (trade.buyToken == address(Deployments.WETH)) {\\n preTradeBalance = address(this).balance;\\n} else if (trade.buyToken == Deployments.ETH_ADDRESS) {\\n preTradeBalance = IERC20(address(Deployments.WETH)).balanceOf(address(this));\\n}\\n```\\n | All of ETH/WETH balance of the contract might be lost in some cases. 
| ```\\nfunction _executeTrade(\\n address target,\\n uint256 msgValue,\\n bytes memory params,\\n address spender,\\n Trade memory trade\\n) private {\\n uint256 preTradeBalance;\\n\\n if (trade.sellToken == address(Deployments.WETH) && spender == Deployments.ETH_ADDRESS) {\\n preTradeBalance = address(this).balance;\\n // Curve doesn't support Deployments.WETH (spender == address(0))\\n uint256 withdrawAmount = _isExactIn(trade) ? trade.amount : trade.limit;\\n Deployments.WETH.withdraw(withdrawAmount);\\n } else if (trade.sellToken == Deployments.ETH_ADDRESS && spender != Deployments.ETH_ADDRESS) {\\n preTradeBalance = IERC20(address(Deployments.WETH)).balanceOf(address(this));\\n // UniswapV3 doesn't support ETH (spender != address(0))\\n uint256 depositAmount = _isExactIn(trade) ? trade.amount : trade.limit;\\n Deployments.WETH.deposit{value: depositAmount }();\\n }\\n\\n (bool success, bytes memory returnData) = target.call{value: msgValue}(params);\\n if (!success) revert TradeExecution(returnData);\\n\\n if (trade.buyToken == address(Deployments.WETH)) {\\n if (address(this).balance > preTradeBalance) {\\n // If the caller specifies that they want to receive Deployments.WETH but we have received ETH,\\n // wrap the ETH to Deployments.WETH.\\n uint256 depositAmount;\\n unchecked { depositAmount = address(this).balance - preTradeBalance; }\\n Deployments.WETH.deposit{value: depositAmount}();\\n }\\n } else if (trade.buyToken == Deployments.ETH_ADDRESS) {\\n uint256 postTradeBalance = IERC20(address(Deployments.WETH)).balanceOf(address(this));\\n if (postTradeBalance > preTradeBalance) {\\n // If the caller specifies that they want to receive ETH but we have received Deployments.WETH,\\n // unwrap the Deployments.WETH to ETH.\\n uint256 withdrawAmount;\\n unchecked { withdrawAmount = postTradeBalance - preTradeBalance; }\\n Deployments.WETH.withdraw(withdrawAmount);\\n }\\n }\\n}\\n```\\n |
Bought/Purchased Token Can Be Sent To Attacker's Wallet Using 0x Adaptor | high | The lack of recipient validation against the 0x order within the 0x adaptor (ZeroExAdapter) allows the purchased/output tokens of the trade to be sent to the attacker's wallet.\\nBackground\\nHow does the emergency vault settlement process work?\\nAnyone can call the `settleVaultEmergency` function to trigger the emergency vault settlement as it is permissionless\\nThe `_getEmergencySettlementParams` function will calculate the excess BPT tokens within the vault to be settled/sold\\nThe amount of excess BPT tokens will be converted to an equivalence amount of strategy tokens to be settled\\nThe strategy tokens will be settled by withdrawing staked BPT tokens from Aura Finance back to the vault for redemption.\\nThe vault will then redeem the BTP tokens from Balancer to redeem its underlying assets (WETH and stETH)\\nThe primary and secondary assets of the vault are WETH and stETH respectively. The secondary asset (stETH) will be traded for the primary asset (WETH) in one of the supported DEXes. In the end, only the primary assets (WETH) should remain within the vault.\\nThe WETH within the vault will be sent to Notional, and Notional will mint the asset tokens (cEther) for the vault in return.\\nAfter completing the emergency vault settlement process, the vault will gain asset tokens (cEther) after settling/selling its excess BPT tokens.\\nIssue Description\\nThe caller of the `settleVaultEmergency` function can specify the trade parameters to sell the secondary tokens (stETH) for primary tokens (WETH) in any of the supported 5 DEX protocols (Curve, Balancer V2, Uniswap V2 & V3 and 0x) in Step 5 of the above emergency vault settlement process.\\nAfter analyzing the adaptors of 5 DEX protocols (Curve, Balancer V2, Uniswap V2 & V3 and 0x), it was observed that Curve, Balancer V2, Uniswap V2, and Uniswap V3 are designed in a way that the purchased tokens can only be returned to the vault.\\nTake the Uniswap V2 adaptor as an example. When the vault triggers the trade execution, it will always pass its own address `address(this)` `to` the `from` parameter of the `getExecutionData` function. The value of `from` parameter will be passed `to` the `to` parameter of Uniswap's `swapExactTokensForTokens` function, which indicates the recipient of the output/purchased tokens. Therefore, it is impossible for the caller `to` specify the recipient of the output tokens `to` another address. This is also the same for Curve, Balancer V2, and Uniswap V3.\\n```\\nFile: UniV2Adapter.sol\\n function getExecutionData(address from, Trade calldata trade)\\n..SNIP..\\n executionCallData = abi.encodeWithSelector(\\n IUniV2Router2.swapExactTokensForTokens.selector,\\n trade.amount,\\n trade.limit,\\n data.path,\\n from,\\n trade.deadline\\n );\\n```\\n\\nHowever, this is not implemented for the 0x adaptor (ZeroExAdapter). The `from` of the `getExecutionData` is completely ignored, and the caller has the full flexibility of crafting an order that benefits the caller.\\n```\\nFile: ZeroExAdapter.sol\\nlibrary ZeroExAdapter {\\n /// @dev executeTrade validates pre and post trade balances and also\\n /// sets and revokes all approvals. We are also only calling a trusted\\n /// zero ex proxy in this case. 
Therefore no order validation is done\\n /// to allow for flexibility.\\n function getExecutionData(address from, Trade calldata trade)\\n internal view returns (\\n address spender,\\n address target,\\n uint256 /* msgValue */,\\n bytes memory executionCallData\\n )\\n {\\n spender = Deployments.ZERO_EX;\\n target = Deployments.ZERO_EX;\\n // msgValue is always zero\\n executionCallData = trade.exchangeData;\\n }\\n}\\n```\\n\\nA number of features are supported by 0x. The full list of the supported features can be found here. Specifically, the following are the functions of attacker interest because it allows the attacker to configure the `recipient` parameter so that the bought tokens will be redirected to the attacker's wallet instead of the vault.\\nLiquidityProviderFeature - sellToLiquidityProvider\\n```\\n /// @dev Sells `sellAmount` of `inputToken` to the liquidity provider\\n /// at the given `provider` address.\\n /// @param inputToken The token being sold.\\n /// @param outputToken The token being bought.\\n /// @param provider The address of the on-chain liquidity provider\\n /// to trade with.\\n /// @param recipient The recipient of the bought tokens. If equal to\\n /// address(0), `msg.sender` is assumed to be the recipient.\\n /// @param sellAmount The amount of `inputToken` to sell.\\n /// @param minBuyAmount The minimum acceptable amount of `outputToken` to\\n /// buy. Reverts if this amount is not satisfied.\\n /// @param auxiliaryData Auxiliary data supplied to the `provider` contract.\\n /// @return boughtAmount The amount of `outputToken` bought.\\n function sellToLiquidityProvider(\\n IERC20TokenV06 inputToken,\\n IERC20TokenV06 outputToken,\\n ILiquidityProvider provider,\\n address recipient,\\n uint256 sellAmount,\\n uint256 minBuyAmount,\\n bytes calldata auxiliaryData\\n )\\n```\\n\\nUniswapV3Feature - sellTokenForTokenToUniswapV3\\n```\\n /// @dev Sell a token for another token directly against uniswap v3.\\n /// @param encodedPath Uniswap-encoded path.\\n /// @param sellAmount amount of the first token in the path to sell.\\n /// @param minBuyAmount Minimum amount of the last token in the path to buy.\\n /// @param recipient The recipient of the bought tokens. Can be zero for sender.\\n /// @return buyAmount Amount of the last token in the path bought.\\n function sellTokenForTokenToUniswapV3(\\n bytes memory encodedPath,\\n uint256 sellAmount,\\n uint256 minBuyAmount,\\n address recipient\\n )\\n```\\n\\nThe malicious user could perform the following actions to steal the assets:\\nAllow malicious users to specify the recipient of the output/purchased tokens to be themselves instead of the vault. This will cause the output/purchased tokens of the trade to be redirected to the malicious users instead of the vault\\nSpecify the `minBuyAmount` parameter of the order to `1 WEI` so that he only needs to provide `1 WEI` to fill the order to obtain all the secondary token (stETH) that need to be sold. This is allowed as there is no slippage control within 0x adaptor (Refer to my "No Slippage Control If The Trade Executes Via 0x DEX During Emergency Vault Settlement" issue write-up) | It is recommended to implement validation against the submitted 0x trade order to ensure that the recipient of the bought tokens is set to the vault when using the 0x DEX. Consider implementing the following validation checks.\\n```\\nlibrary ZeroExAdapter {\\n /// @dev executeTrade validates pre and post trade balances and also\\n /// sets and revokes all approvals. 
We are also only calling a trusted\\n /// zero ex proxy in this case. Therefore no order validation is done\\n /// to allow for flexibility.\\n function getExecutionData(address from, Trade calldata trade)\\n internal view returns (\\n address spender,\\n address target,\\n uint256 /* msgValue */,\\n bytes memory executionCallData\\n )\\n {\\n spender = Deployments.ZERO_EX;\\n target = Deployments.ZERO_EX;\\n \\n _validateExchangeData(from, trade);\\n \\n // msgValue is always zero\\n executionCallData = trade.exchangeData;\\n }\\n \\n function _validateExchangeData(address from, Trade calldata trade) internal pure {\\n bytes calldata _data = trade.exchangeData;\\n\\n address inputToken;\\n address outputToken;\\n address recipient;\\n uint256 inputTokenAmount;\\n uint256 minOutputTokenAmount;\\n\\n require(_data.length >= 4, "Invalid calldata");\\n bytes4 selector;\\n assembly {\\n selector := and(\\n // Read the first 4 bytes of the _data array from calldata.\\n calldataload(add(36, calldataload(164))), // 164 = 5 * 32 + 4\\n 0xffffffff00000000000000000000000000000000000000000000000000000000\\n )\\n }\\n \\n if (selector == 0xf7fcd384) {\\n \\n (\\n inputToken, \\n outputToken, \\n , \\n recipient, \\n inputTokenAmount, \\n minOutputTokenAmount\\n ) = abi.decode(_data[4:], (address, address, address, address, uint256, uint256));\\n require(recipient == from, "Mismatched recipient");\\n } else if (selector == 0x6af479b2) {\\n // sellTokenForTokenToUniswapV3()\\n bytes memory encodedPath;\\n // prettier-ignore\\n (\\n encodedPath,\\n inputTokenAmount, \\n minOutputTokenAmount, \\n recipient\\n ) = abi.decode(_data[4:], (bytes, uint256, uint256, address));\\n require(recipient == from, "Mismatched recipient");\\n }\\n }\\n}\\n```\\n | Attackers can craft a 0x order that redirects the assets to their wallet, leading to loss of assets for the vaults and their users. | ```\\nFile: UniV2Adapter.sol\\n function getExecutionData(address from, Trade calldata trade)\\n..SNIP..\\n executionCallData = abi.encodeWithSelector(\\n IUniV2Router2.swapExactTokensForTokens.selector,\\n trade.amount,\\n trade.limit,\\n data.path,\\n from,\\n trade.deadline\\n );\\n```\\n |
Settlement slippage is not implemented correctly which may lead to some vaults being impossible to settle | high | The contract is supposed to implement a different max slippage value depending on the settlement type, but these values have no impact because they are never actually applied. Instead, regardless of settlement type or function inputs, max slippage will always be limited to the value of balancerPoolSlippageLimitPercent. This can be problematic because the default value allows only 1% slippage. If settlement slippage goes outside of 1% then settlement of any kind will become impossible.\\nBoosted3TokenAuraHelper.sol#L95-L99\\n```\\n params.minPrimary = poolContext._getTimeWeightedPrimaryBalance(\\n oracleContext, strategyContext, bptToSettle\\n );\\n\\n params.minPrimary = params.minPrimary * strategyContext.vaultSettings.balancerPoolSlippageLimitPercent / \\n uint256(BalancerConstants.VAULT_PERCENT_BASIS);\\n```\\n\\nBoosted3TokenAuraHelper#_executeSettlement first sets params.minPrimary overwriting any value from function input. Next it adjusts minPrimary by balancerPoolSlippageLimitPercent, which is a constant set at pool creation; however it doesn't ever adjust it by Params.DynamicTradeParams.oracleSlippagePercent. This means that the max possible slippage regardless of settlement type is limited to the slippage allowed by balancerPoolSlippageLimitPercent. If the max slippage ever goes outside of this range, then settlement of any kind will become impossible. | Params.DynamicTradeParams.oracleSlippagePercent is validated in every scenario before Boosted3TokenAuraHelper#_executeSettlement is called, so we can apply these values directly when calculating minPrimary:\\n```\\n params.minPrimary = poolContext._getTimeWeightedPrimaryBalance(\\n oracleContext, strategyContext, bptToSettle\\n );\\n\\n+ DynamicTradeParams memory callbackData = abi.decode(\\n+ params.secondaryTradeParams, (DynamicTradeParams)\\n+ );\\n\\n- params.minPrimary = params.minPrimary * strategyContext.vaultSettings.balancerPoolSlippageLimitPercent / \\n+ params.minPrimary = params.minPrimary * \\n+ (strategyContext.vaultSettings.balancerPoolSlippageLimitPercent - callbackData.oracleSlippagePercent) / \\n uint256(BalancerConstants.VAULT_PERCENT_BASIS);\\n```\\n | Settlement may become impossible | ```\\n params.minPrimary = poolContext._getTimeWeightedPrimaryBalance(\\n oracleContext, strategyContext, bptToSettle\\n );\\n\\n params.minPrimary = params.minPrimary * strategyContext.vaultSettings.balancerPoolSlippageLimitPercent / \\n uint256(BalancerConstants.VAULT_PERCENT_BASIS);\\n```\\n |
Gain From Balancer Vaults Can Be Stolen | medium | The BPT gain (rewards) of the vault can be stolen by an attacker.\\nAt T0 (Time 0), assume that the state of the WETH/wstETH MetaPool Vault is as follows:\\ntotalBPTHeld = 1000 BPT\\ntotalStrategyTokenGlobal = 1000\\n1 Strategy Token can claim 1 BPT\\nAlice holds 1000 Strategy Tokens, and she is the only person invested in the vault at this point in time\\nAssume that if the `reinvestReward` is called, it will reinvest 1000 BPT back into the vault. Thus, if the `reinvestReward` is called, the `totalBPTHeld` of the vault will become 2000 BPT.\\nFollowing is the description of the attack:\\nThe attacker notice that if the `reinvestReward` is called, it will result in a large increase in the total BPT held by the vault\\nThe attacker flash-loan a large amount of WETH (e.g. 1,000,000) from a lending protocol (e.g. dydx)\\nEnter the vault by depositing 1,000,000 WETH by calling the `VaultAccountAction.enterVault` function. However, do not borrow any cash from Notional by setting the `fCash` parameter of the `VaultAccountAction.enterVault` function to `0`.\\nThere is no need to borrow from Notional as the attacker could already flash-loan a large amount of WETH with a non-existence fee rate (e.g. 1 Wei in dydx). Most importantly, the vault fee will only be charged if the user borrows from Notional. The fee is assessed within the `VaultAccount._borrowIntoVault`, which will be skipped if users are not borrowing. By not borrowing from Notional, the attacker does not need to pay any fee when entering the vault and this will make the attacker more profitable.\\nThe vault will deposit 1,000,000 WETH to the Balancer pool and receive a large amount of BPT in return. For simplicity's sake, assume that the vault receives 1,000,000 BPT in return.\\nBased on the `StrategyUtils._convertBPTClaimToStrategyTokens` function, the attacker will receive 100,000 strategy tokens. The state of the vault will be as follows after the attacker deposits:\\ntotalBPTHeld = 1,001,000 BPT\\ntotalStrategyTokenGlobal = 1,001,000\\n1 Strategy Token can claim 1 BPT\\nAlice holds 1000 Strategy Tokens\\nAttacker holds 1,000,000 Strategy Tokens\\nThe attacker calls the `reinvestReward` function, and reward tokens will be reinvested. Assume that the vault receives 1000 BPT. The state of the vault will be as follows after the reinvest:\\ntotalBPTHeld = 1,002,000 BPT\\ntotalStrategyTokenGlobal = 1,001,000\\n1 Strategy Token can claim ~1.0009 BPT\\nAlice holds 1000 Strategy Tokens\\nAttacker holds 1,000,000 Strategy Tokens\\nThe attacker exits the vault with all his strategy tokens by calling the `VaultAccountAction.exitVault` function. This will cause the vault the redeem all the 100,000 Strategy Tokens owned by the attacker. Based on the `StrategyUtils._convertStrategyTokensToBPTClaim` function, the attacker will receive 1,000,999 BPT in return. Note that there is no fee for exiting the vault and there is no need for repaying the debt as the attacker did not borrow any assets from Notional at the beginning.\\n```\\nbptClaim = (strategyTokenAmount * context.totalBPTHeld) / context.vaultState.totalStrategyTokenGlobal;\\n1,000,999 = (1000000 * 1002000) / 1001000\\n```\\n\\nProceed to repay the flash-loan at the end of the transaction. All the above steps are executed within a single transaction. 
Within a single transaction/block, the attacker is able to increase his holding of 1,000,000 BPT to 1,000,999 BPT after calling the `reinvestReward` function, and effectively gain around 999 BPT.\\nAlice who had been invested in the vault since the vault was first launched should be entitled to the majority of the rewards (Close to 1000 BPT). However, the attacker who came in right before the `reinvestReward` function was triggered managed to obtain almost all of her allocated shares of rewards (999 BPT) and left only 1 BPT for Alice.\\nNote: A flash-loan is not required if the attacker has sufficient liquidity to carry out the attack or the vault does not have much liquidity.\\nFollowing are the two functions for converting between BPT and Strategy Token for reference.\\n```\\n/// @notice Converts BPT to strategy tokens\\nfunction _convertBPTClaimToStrategyTokens(StrategyContext memory context, uint256 bptClaim)\\n internal pure returns (uint256 strategyTokenAmount) {\\n if (context.totalBPTHeld == 0) {\\n // Strategy tokens are in 8 decimal precision, BPT is in 18. Scale the minted amount down.\\n return (bptClaim * uint256(Constants.INTERNAL_TOKEN_PRECISION)) / \\n BalancerConstants.BALANCER_PRECISION;\\n }\\n\\n // BPT held in maturity is calculated before the new BPT tokens are minted, so this calculation\\n // is the tokens minted that will give the account a corresponding share of the new bpt balance held.\\n // The precision here will be the same as strategy token supply.\\n strategyTokenAmount = (bptClaim * context.vaultState.totalStrategyTokenGlobal) / context.totalBPTHeld;\\n}\\n```\\n\\n```\\n/// @notice Converts strategy tokens to BPT\\nfunction _convertStrategyTokensToBPTClaim(StrategyContext memory context, uint256 strategyTokenAmount)\\n internal pure returns (uint256 bptClaim) {\\n require(strategyTokenAmount <= context.vaultState.totalStrategyTokenGlobal);\\n if (context.vaultState.totalStrategyTokenGlobal > 0) {\\n bptClaim = (strategyTokenAmount * context.totalBPTHeld) / context.vaultState.totalStrategyTokenGlobal;\\n }\\n}\\n```\\n | Following are the list of root causes of the issue and some recommendation to mitigate them.\\n`reinvestReward` function is permissionless and can be called by anyone. It is recommended to implement access control to ensure that this function can only be triggered by Notional. Do note that even if the attacker cannot trigger the `reinvestReward` function, it is still possible for the attacker to front-run and back-end the `reinvestReward` transaction to carry out the attack if they see this transaction in the public mempool. Thus, consider sending the `reinvestReward` transaction as a private transaction via Flashbot so that the attacker cannot sandwich the transaction.\\nThere is no withdrawal fee. Also, there is no deposit fee as long as users did not borrow from Notional. Therefore, this attack is mostly profitable. It is recommended to impose a fee on the users of the vault even if the users did not borrow from Notional. All users should be charged a fee for the use of the vault. This will make the attack less likely to be profitable in most cases.\\nUsers can enter and exit the vault within the same transaction/block. This allows the attacker to leverage the flash-loan facility to reduce the cost of the attack to almost nothing. It is recommended to prevent users from entering and exiting the vault within the same transaction/block. 
If the user entered the vault in this block, he/she could only exit at the next block.\\nThere is no snapshotting to keep track of the deposit to ensure that BPT gain/rewards distributions are weighted according to deposit duration. Thus, a whale could deposit right before the `reinvestReward` function is triggered and exit the vault afterward and reap most of the gains. Consider implementing snapshotting within the vault. | Loss of assets for the users as their BPT gain (rewards) can be stolen. This issue affects all balancer-related vaults that contain the permissionless `reinvestReward` function. | ```\\nbptClaim = (strategyTokenAmount * context.totalBPTHeld) / context.vaultState.totalStrategyTokenGlobal;\\n1,000,999 = (1000000 * 1002000) / 1001000\\n```\\n |
Malicious Users Can Deny Notional Treasury From Receiving Fee | medium | Malicious users can deny Notional Treasury from receiving fees when rewards are reinvested.\\nThe `claimRewardTokens` function will harvest the reward tokens from the Aura Pool, and the reward tokens will be transferred to the Balancer Vault. At lines 77-78, a portion of the reward tokens would be sent to the `FEE_RECEIVER`. After clarifying with the sponsor, it was understood that the `FEE_RECEIVER` would be set to Notional Treasury so that it would receive some of the accrued reward tokens.\\n```\\nFile: AuraStakingMixin.sol\\n function claimRewardTokens() external returns (uint256[] memory claimedBalances) {\\n uint16 feePercentage = BalancerVaultStorage.getStrategyVaultSettings().feePercentage;\\n IERC20[] memory rewardTokens = _rewardTokens();\\n\\n uint256 numRewardTokens = rewardTokens.length;\\n\\n claimedBalances = new uint256[](numRewardTokens);\\n for (uint256 i; i < numRewardTokens; i++) {\\n claimedBalances[i] = rewardTokens[i].balanceOf(address(this));\\n }\\n\\n AURA_REWARD_POOL.getReward(address(this), true);\\n for (uint256 i; i < numRewardTokens; i++) {\\n claimedBalances[i] = rewardTokens[i].balanceOf(address(this)) - claimedBalances[i];\\n\\n if (claimedBalances[i] > 0 && feePercentage != 0 && FEE_RECEIVER != address(0)) {\\n uint256 feeAmount = claimedBalances[i] * feePercentage / BalancerConstants.VAULT_PERCENT_BASIS;\\n rewardTokens[i].checkTransfer(FEE_RECEIVER, feeAmount);\\n claimedBalances[i] -= feeAmount;\\n }\\n }\\n\\n emit BalancerEvents.ClaimedRewardTokens(rewardTokens, claimedBalances);\\n }\\n```\\n\\nWithin the `claimRewardTokens` function, it will call the `AURA_REWARD_POOL.getReward` to harvest the reward tokens. Within the `claimRewardTokens` function, it also uses the pre-balance and post-balance of the reward tokens to check the actual amount of reward tokens that are transferred into the vault.\\nHowever, the issue is that anyone can claim reward tokens from Aura Pool on behalf of any address. Following is the implementation of the `getReward` function taken from Aura's BaseRewardPool4626 contract called by the vault for reference.\\n```\\n/**\\n * @dev Gives a staker their rewards, with the option of claiming extra rewards\\n * @param _account Account for which to claim\\n * @param _claimExtras Get the child rewards too?\\n */\\nfunction getReward(address _account, bool _claimExtras) public updateReward(_account) returns(bool){\\n uint256 reward = earned(_account);\\n if (reward > 0) {\\n rewards[_account] = 0;\\n rewardToken.safeTransfer(_account, reward);\\n IDeposit(operator).rewardClaimed(pid, _account, reward);\\n emit RewardPaid(_account, reward);\\n }\\n\\n //also get rewards from linked rewards\\n if(_claimExtras){\\n for(uint i=0; i < extraRewards.length; i++){\\n IRewards(extraRewards[i]).getReward(_account);\\n }\\n }\\n return true;\\n}\\n\\nmodifier updateReward(address account) {\\n rewardPerTokenStored = rewardPerToken();\\n lastUpdateTime = lastTimeRewardApplicable();\\n if (account != address(0)) {\\n rewards[account] = earned(account);\\n userRewardPerTokenPaid[account] = rewardPerTokenStored;\\n }\\n _;\\n}\\n\\nfunction earned(address account) public view returns (uint256) {\\n return\\n balanceOf(account)\\n .mul(rewardPerToken().sub(userRewardPerTokenPaid[account]))\\n .div(1e18)\\n .add(rewards[account]);\\n}\\n```\\n\\nAssume that a malicious user front runs a call to claim rewards tokens. 
When a keeper calls the `AURA_REWARD_POOL.getReward` to harvest the reward tokens, it will return no reward tokens, and therefore the difference between the pre-balance and post-balance of the reward tokens will amount to zero. Therefore, no reward tokens will be sent to the `FEE_RECEIVER` (Notional Treasury) as a fee.\\nProof-of-Concept\\nThe `test_claim_rewards_success` test case shows that under normal circumstances, the Notional treasury will receive a portion of the accrued BAL and AURA as fees.\\nThe `test_claim_rewards_success_frontrun` test case shows that if the `getReward` is front-run by an attacker, the Notional treasury will receive nothing.\\nThe following is the test script and its result.\\n```\\nimport pytest\\nfrom brownie import ZERO_ADDRESS, Wei, accounts, interface\\nfrom tests.fixtures import *\\nfrom tests.balancer.helpers import enterMaturity, get_metastable_amounts\\nfrom scripts.common import get_univ3_single_data, get_univ3_batch_data, DEX_ID, TRADE_TYPE\\n\\nchain = Chain()\\n\\ndef test_claim_rewards_success(StratStableETHstETH):\\n (env, vault) = StratStableETHstETH\\n primaryBorrowAmount = 100e8\\n depositAmount = 50e18\\n enterMaturity(env, vault, 1, 0, depositAmount, primaryBorrowAmount, accounts[0])\\n chain.sleep(3600 * 24 * 365)\\n chain.mine()\\n feeReceiver = vault.getStrategyContext()["baseStrategy"]["feeReceiver"]\\n feePercentage = vault.getStrategyContext()["baseStrategy"]["vaultSettings"]["feePercentage"] / 1e2\\n assert env.tokens["BAL"].balanceOf(vault.address) == 0\\n assert env.tokens["AURA"].balanceOf(vault.address) == 0\\n assert env.tokens["BAL"].balanceOf(feeReceiver) == 0\\n assert env.tokens["AURA"].balanceOf(feeReceiver) == 0\\n\\n vault.claimRewardTokens({"from": accounts[1]})\\n\\n # Test that the fee receiver received portion of the rewards as fee\\n assert env.tokens["BAL"].balanceOf(feeReceiver) > 0\\n assert env.tokens["AURA"].balanceOf(feeReceiver) > 0\\n\\ndef test_claim_rewards_success_frontrun(StratStableETHstETH):\\n (env, vault) = StratStableETHstETH\\n primaryBorrowAmount = 100e8\\n depositAmount = 50e18\\n enterMaturity(env, vault, 1, 0, depositAmount, primaryBorrowAmount, accounts[0])\\n chain.sleep(3600 * 24 * 365)\\n chain.mine()\\n feeReceiver = vault.getStrategyContext()["baseStrategy"]["feeReceiver"]\\n feePercentage = vault.getStrategyContext()["baseStrategy"]["vaultSettings"]["feePercentage"] / 1e2\\n assert env.tokens["BAL"].balanceOf(vault.address) == 0\\n assert env.tokens["AURA"].balanceOf(vault.address) == 0\\n assert env.tokens["BAL"].balanceOf(feeReceiver) == 0\\n assert env.tokens["AURA"].balanceOf(feeReceiver) == 0\\n\\n auraPool = interface.IAuraRewardPool(vault.getStrategyContext()["stakingContext"]["auraRewardPool"])\\n auraPool.getReward(vault.address, True, {"from": accounts[5]}) # Attacker frontrun the getReward\\n vault.claimRewardTokens({"from": accounts[1]})\\n\\n # Test that the fee receiver received nothing due the frontrunning\\n assert env.tokens["BAL"].balanceOf(feeReceiver) == 0\\n assert env.tokens["AURA"].balanceOf(feeReceiver) == 0\\n```\\n\\n```\\n❯ brownie test tests/balancer/rewards/test_rewards_stable_eth_steth.py --network mainnet-fork\\nBrownie v1.18.1 - Python development framework for Ethereum\\n\\n=============================================================================================== test session starts ===============================================================================================\\nplatform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, 
pluggy-1.0.0\\nplugins: eth-brownie-1.18.1, hypothesis-6.27.3, forked-1.4.0, xdist-1.34.0, web3-5.27.0\\ncollected 2 items \\nAttached to local RPC client listening at '127.0.0.1:8545'// rest of code\\n\\ntests/balancer/rewards/test_rewards_stable_eth_steth.py .. [100%]\\n\\n========================================================================================== 2 passed, 1 warning in 5.72s ===========================================================================================\\n```\\n | It is recommended not to use the pre-balance and post-balance of the reward tokens when claiming reward tokens. A more robust internal accounting scheme needs to be implemented to keep track of actual reward tokens received from the pool so that the appropriate amount of the accrued reward tokens can be sent to the Notional Treasury.\\nReference\\nA similar high-risk issue was found in the past audit report | Notional Treasury will not receive a portion of the accrued reward tokens as fees. Loss of assets for Notional protocol and its governance token holders. | ```\\nFile: AuraStakingMixin.sol\\n function claimRewardTokens() external returns (uint256[] memory claimedBalances) {\\n uint16 feePercentage = BalancerVaultStorage.getStrategyVaultSettings().feePercentage;\\n IERC20[] memory rewardTokens = _rewardTokens();\\n\\n uint256 numRewardTokens = rewardTokens.length;\\n\\n claimedBalances = new uint256[](numRewardTokens);\\n for (uint256 i; i < numRewardTokens; i++) {\\n claimedBalances[i] = rewardTokens[i].balanceOf(address(this));\\n }\\n\\n AURA_REWARD_POOL.getReward(address(this), true);\\n for (uint256 i; i < numRewardTokens; i++) {\\n claimedBalances[i] = rewardTokens[i].balanceOf(address(this)) - claimedBalances[i];\\n\\n if (claimedBalances[i] > 0 && feePercentage != 0 && FEE_RECEIVER != address(0)) {\\n uint256 feeAmount = claimedBalances[i] * feePercentage / BalancerConstants.VAULT_PERCENT_BASIS;\\n rewardTokens[i].checkTransfer(FEE_RECEIVER, feeAmount);\\n claimedBalances[i] -= feeAmount;\\n }\\n }\\n\\n emit BalancerEvents.ClaimedRewardTokens(rewardTokens, claimedBalances);\\n }\\n```\\n |
Balancer Vault Will Receive Fewer Assets As The Current Design Does Not Serve The Interest Of Vault Shareholders | medium | The current implementation of reinvesting reward function does not benefit the vault shareholders as the current design does not serve the vault shareholder's interest well. Thus, this will result in Balancer vaults receiving fewer assets.\\nThe `reinvestReward` function of the Balancer Vaults (MetaStable2TokenAuraVault and Boosted3TokenAuraVault) is permissionless and can be called by anyone. By calling `reinvestReward` function, the vault will trade the reward tokens received by the vault for tokens that are accepted by the balancer pool, and deposit them to the pool to obtain more BPT tokens for the vault shareholders. By continuously reinvesting the reward tokens into the pool, the vault shareholders will be able to lay claim to more BPT tokens per share over time.\\n```\\nFile: MetaStable2TokenAuraHelper.sol\\n function reinvestReward(\\n MetaStable2TokenAuraStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n ) external {\\n```\\n\\n```\\nFile: Boosted3TokenAuraHelper.sol\\n function reinvestReward(\\n Boosted3TokenAuraStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n ) external { \\n```\\n\\nThe caller of the `reinvestReward` function can specify the trading configuration such as the DEX (e.g. Uniswap, Curve) that the trade should be executed and the slippage (params.tradeParams.oracleSlippagePercent). Note that the slippage defined must be equal to or less than the `strategyContext.vaultSettings.maxRewardTradeSlippageLimitPercent` setting that is currently set to 5% within the test scripts.\\nNotional Vaults support trading in multiple DEX protocols (Curve, Balancer V2, Uniswap V2 & V3 and 0x). Since `reinvestReward` function is callable by anyone, the liquidity provider of the supported DEX protocols will want the trade to be executed on the DEX pool that they invested on. This will allow them to earn an additional transaction fee from the trade. The amount of transaction fee earned will be significant if the volume is large when there are many vaults and reward tokens to be reinvested. In addition, the caller will set the slippage to the maximum configurable threshold (e.g. 5% in this example) to maximize the profit. Therefore, this will end up having various liquidity providers front-running each other to ensure that their `reinvestReward` transaction gets executed in order to extract value. | It is recommended to implement access control on the `reinvestReward` function to ensure that this function can only be triggered by Notional who has the best interest of its vault users.\\nAlso, consider sending the `reinvestReward` transaction as a private transaction via Flashbot so that the attacker cannot perform any kind of sandwich attack on the reinvest rewards transaction. | This does not serve the vault shareholder's interest well as the caller of the `reinvestReward` function will not be trading and reinvesting in an optimal way that maximizes the value of the shareholder's assets in the vaults. There is a misalignment in the objective between the vault shareholders and callers. Therefore, the vault and its users will end up on the losing end and receive fewer assets than they should. | ```\\nFile: MetaStable2TokenAuraHelper.sol\\n function reinvestReward(\\n MetaStable2TokenAuraStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n ) external {\\n```\\n |
Existing Slippage Control Can Be Bypassed During Vault Settlement | medium | The existing slippage control can be bypassed/disabled during vault settlement, thus allowing the trade to be executed without consideration of its slippage.\\nNote 1: This issue affects MetaStable2 and Boosted3 balancer leverage vaults\\nNote 2: This issue affects the following three (3) processes. However, the root cause and the remediation action are the same for all. Therefore, only the PoC for the "Emergency vault settlement" process will be documented in this report, and the other two processes will be omitted for brevity. Refer to "Appendix I - Normal and Post Maturity Vault Settlement" for more details.\\nEmergency vault settlement\\nNormal vault settlement\\nPost-Maturity vault settlement.\\nNote 3: The issue affects all the supported DEXs (Curve, Balancer V2, Uniswap V2, Uniswap V3 and 0x) within Notional\\nThe `emergencySettlementSlippageLimitPercent` of the vault is set to 10% as per the environment file provided by Notional.\\n```\\nFile: BalancerEnvironment.py\\n "postMaturitySettlementSlippageLimitPercent": 10e6, # 10%\\n "emergencySettlementSlippageLimitPercent": 10e6, # 10%\\n```\\n\\nWhen a user calls the `settleVaultEmergency` function, the vault will validate that the slippage (DynamicTradeParams.oracleSlippagePercent) defined by the caller is within the acceptable slippage range by calling `SettlementUtils._decodeParamsAndValidate` function.\\n```\\nFile: MetaStable2TokenAuraHelper.sol\\n function settleVaultEmergency(\\n MetaStable2TokenAuraStrategyContext calldata context, \\n uint256 maturity, \\n bytes calldata data\\n ) external {\\n RedeemParams memory params = SettlementUtils._decodeParamsAndValidate(\\n context.baseStrategy.vaultSettings.emergencySettlementSlippageLimitPercent,\\n data\\n );\\n\\n uint256 bptToSettle = context.baseStrategy._getEmergencySettlementParams({\\n poolContext: context.poolContext.basePool, \\n maturity: maturity, \\n totalBPTSupply: IERC20(context.poolContext.basePool.pool).totalSupply()\\n });\\n```\\n\\nThe `SettlementUtils._decodeParamsAndValidate` function will validate that the slippage (DynamicTradeParams.oracleSlippagePercent) passed in by the caller does not exceed the designated threshold (10%). In Line 41-42, the transaction will revert if the `DynamicTradeParams.oracleSlippagePercent` exceeds the `slippageLimitPercent`. Note that `slippageLimitPercent` is equal to `emergencySettlementSlippageLimitPercent` which is `10%`.\\nThere is an edge case with the condition at Line 41. Consider the following cases:\\nIf `callbackData.oracleSlippagePercent` = 9% and `slippageLimitPercent` = 10%, the condition will evaluate as `False` and transaction will not revert\\nIf `callbackData.oracleSlippagePercent` = 11% and `slippageLimitPercent` = 10%, the condition will evaluate as `True` and transaction will revert because it exceeds the designated threshold.\\nIf `callbackData.oracleSlippagePercent` = 0% and `slippageLimitPercent` = 10%, the condition will evaluate as `False` and transaction will not revert\\nThe problem is that when `callbackData.oracleSlippagePercent` is `0%`, this effectively means that there is no slippage limit. 
This essentially exceeded the designated threshold (10%), and the transaction should revert instead, but it did not.\\n```\\nFile: SettlementUtils.sol\\n /// @notice Validates that the slippage passed in by the caller\\n /// does not exceed the designated threshold.\\n /// @param slippageLimitPercent configured limit on the slippage from the oracle price allowed\\n /// @param data trade parameters passed into settlement\\n /// @return params abi decoded redemption parameters\\n function _decodeParamsAndValidate(\\n uint32 slippageLimitPercent,\\n bytes memory data\\n ) internal view returns (RedeemParams memory params) {\\n params = abi.decode(data, (RedeemParams));\\n DynamicTradeParams memory callbackData = abi.decode(\\n params.secondaryTradeParams, (DynamicTradeParams)\\n );\\n\\n if (callbackData.oracleSlippagePercent > slippageLimitPercent) {\\n revert Errors.SlippageTooHigh(callbackData.oracleSlippagePercent, slippageLimitPercent);\\n }\\n }\\n```\\n\\nWithin `executeTradeWithDynamicSlippage` function, it will calculate the `trade.limit` by calling the `PROXY.getLimitAmount`. The `trade.limit` is the maximum amount of sellToken that can be sold OR the minimum amount of buyToken the contract is expected to receive from the DEX depending on whether you are performing a sell or buy.\\n```\\nFile: TradingModule.sol\\n function executeTradeWithDynamicSlippage(\\n uint16 dexId,\\n Trade memory trade,\\n uint32 dynamicSlippageLimit\\n ) external override returns (uint256 amountSold, uint256 amountBought) {\\n // This method calls back into the implementation via the proxy so that it has proper\\n // access to storage.\\n trade.limit = PROXY.getLimitAmount(\\n trade.tradeType,\\n trade.sellToken,\\n trade.buyToken,\\n trade.amount,\\n dynamicSlippageLimit\\n );\\n```\\n\\nWithin the `TradingUtils._getLimitAmount` function, when the `slippageLimit` is set to `0`,\\nIf it is a sell trade, the `limitAmount` will be set to `type(uint256).max`. See Line 187\\nIf it is a buy trade, the `limitAmount` will be set to `0`. See Line 207\\nThese effectively remove the slippage limit. Therefore, a malicious user can specify the `callbackData.oracleSlippagePercent` to be `0%` to bypass the slippage validation check.\\n```\\nFile: TradingUtils.sol\\n function _getLimitAmount(\\n TradeType tradeType,\\n address sellToken,\\n address buyToken,\\n uint256 amount,\\n uint32 slippageLimit,\\n uint256 oraclePrice,\\n uint256 oracleDecimals\\n ) internal view returns (uint256 limitAmount) {\\n uint256 sellTokenDecimals = 10 **\\n (\\n sellToken == Deployments.ETH_ADDRESS\\n ? 18\\n : IERC20(sellToken).decimals()\\n );\\n uint256 buyTokenDecimals = 10 **\\n (\\n buyToken == Deployments.ETH_ADDRESS\\n ? 
18\\n : IERC20(buyToken).decimals()\\n );\\n\\n if (tradeType == TradeType.EXACT_OUT_SINGLE || tradeType == TradeType.EXACT_OUT_BATCH) {\\n // 0 means no slippage limit\\n if (slippageLimit == 0) {\\n return type(uint256).max;\\n }\\n // For exact out trades, we need to invert the oracle price (1 / oraclePrice)\\n // We increase the precision before we divide because oraclePrice is in\\n // oracle decimals\\n oraclePrice = (oracleDecimals * oracleDecimals) / oraclePrice;\\n // For exact out trades, limitAmount is the max amount of sellToken the DEX can\\n // pull from the contract\\n limitAmount =\\n ((oraclePrice + \\n ((oraclePrice * uint256(slippageLimit)) /\\n Constants.SLIPPAGE_LIMIT_PRECISION)) * amount) / \\n oracleDecimals;\\n\\n // limitAmount is in buyToken precision after the previous calculation,\\n // convert it to sellToken precision\\n limitAmount = (limitAmount * sellTokenDecimals) / buyTokenDecimals;\\n } else {\\n // 0 means no slippage limit\\n if (slippageLimit == 0) {\\n return 0;\\n }\\n // For exact in trades, limitAmount is the min amount of buyToken the contract\\n // expects from the DEX\\n limitAmount =\\n ((oraclePrice -\\n ((oraclePrice * uint256(slippageLimit)) /\\n Constants.SLIPPAGE_LIMIT_PRECISION)) * amount) /\\n oracleDecimals;\\n\\n // limitAmount is in sellToken precision after the previous calculation,\\n // convert it to buyToken precision\\n limitAmount = (limitAmount * buyTokenDecimals) / sellTokenDecimals;\\n }\\n }\\n```\\n\\nProof-of-Concept\\nThe following test case shows that when the slippage is set to 11% (11e6), the transaction will be reverted and fails the test. This is working as intended because the slippage (11%) exceeded the threshold (emergencySettlementSlippageLimitPercent = 10%).\\n```\\ndef test_emergency_single_maturity_success(StratBoostedPoolUSDCPrimary):\\n (env, vault) = StratBoostedPoolUSDCPrimary\\n primaryBorrowAmount = 5000e8\\n depositAmount = 10000e6\\n env.tokens["USDC"].approve(env.notional, 2 ** 256 - 1, {"from": env.whales["USDC"]})\\n maturity = enterMaturity(env, vault, 2, 0, depositAmount, primaryBorrowAmount, env.whales["USDC"])\\n strategyContext = vault.getStrategyContext()\\n settings = dict(strategyContext["baseStrategy"]["vaultSettings"].dict())\\n settings["maxBalancerPoolShare"] = 0\\n vault.setStrategyVaultSettings(\\n list(settings.values()), \\n {"from": env.notional.owner()}\\n )\\n # minPrimary is calculated internally for boosted pools \\n redeemParams = get_redeem_params(0, 0, \\n get_dynamic_trade_params(\\n DEX_ID["UNISWAP_V3"], TRADE_TYPE["EXACT_IN_SINGLE"], 11e6, True, get_univ3_single_data(3000)\\n )\\n )\\n vault.settleVaultEmergency(maturity, redeemParams, {"from": env.notional.owner()})\\n vaultState = env.notional.getVaultState(vault.address, maturity)\\n assert vaultState["totalStrategyTokens"] == 0\\n```\\n\\n```\\n❯ brownie test tests/balancer/settlement/test_settlement_boosted_usdc.py --network mainnet-fork\\nBrownie v1.18.1 - Python development framework for Ethereum\\n\\n=============================================================================================== test session starts ===============================================================================================\\nplatform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0\\nplugins: eth-brownie-1.18.1, hypothesis-6.27.3, forked-1.4.0, xdist-1.34.0, web3-5.27.0\\ncollected 1 item \\nAttached to local RPC client listening at '127.0.0.1:8545'// rest of 
code\\n\\ntests/balancer/settlement/test_settlement_boosted_usdc.py F [100%]\\n\\n==================================================================================================== FAILURES =====================================================================================================\\n```\\n\\nThe following test case shows that when the slippage is set to 0, the transaction does not revert and passes the test. This is not working as intended because having no slippage (0) technically exceeded the threshold (emergencySettlementSlippageLimitPercent = 10%).\\n```\\ndef test_emergency_single_maturity_success(StratBoostedPoolUSDCPrimary):\\n (env, vault) = StratBoostedPoolUSDCPrimary\\n primaryBorrowAmount = 5000e8\\n depositAmount = 10000e6\\n env.tokens["USDC"].approve(env.notional, 2 ** 256 - 1, {"from": env.whales["USDC"]})\\n maturity = enterMaturity(env, vault, 2, 0, depositAmount, primaryBorrowAmount, env.whales["USDC"])\\n strategyContext = vault.getStrategyContext()\\n settings = dict(strategyContext["baseStrategy"]["vaultSettings"].dict())\\n settings["maxBalancerPoolShare"] = 0\\n vault.setStrategyVaultSettings(\\n list(settings.values()), \\n {"from": env.notional.owner()}\\n )\\n # minPrimary is calculated internally for boosted pools \\n redeemParams = get_redeem_params(0, 0, \\n get_dynamic_trade_params(\\n DEX_ID["UNISWAP_V3"], TRADE_TYPE["EXACT_IN_SINGLE"], 0, True, get_univ3_single_data(3000)\\n )\\n )\\n vault.settleVaultEmergency(maturity, redeemParams, {"from": env.notional.owner()})\\n vaultState = env.notional.getVaultState(vault.address, maturity)\\n assert vaultState["totalStrategyTokens"] == 0\\n```\\n\\n```\\n❯ brownie test tests/balancer/settlement/test_settlement_boosted_usdc.py --network mainnet-fork\\nBrownie v1.18.1 - Python development framework for Ethereum\\n\\n=============================================================================================== test session starts ===============================================================================================\\nplatform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0\\nplugins: eth-brownie-1.18.1, hypothesis-6.27.3, forked-1.4.0, xdist-1.34.0, web3-5.27.0\\ncollected 1 item \\nAttached to local RPC client listening at '127.0.0.1:8545'// rest of code\\n\\ntests/balancer/settlement/test_settlement_boosted_usdc.py . 
[100%]\\n\\n========================================================================================== 1 passed, 1 warning in 4.31s ===========================================================================================\\n```\\n | Update the `SettlementUtils._decodeParamsAndValidate` function to revert if the slippage is set to zero.\\n```\\nFile: SettlementUtils.sol\\n /// @notice Validates that the slippage passed in by the caller\\n /// does not exceed the designated threshold.\\n /// @param slippageLimitPercent configured limit on the slippage from the oracle price allowed\\n /// @param data trade parameters passed into settlement\\n /// @return params abi decoded redemption parameters\\n function _decodeParamsAndValidate(\\n uint32 slippageLimitPercent,\\n bytes memory data\\n ) internal view returns (RedeemParams memory params) {\\n params = abi.decode(data, (RedeemParams));\\n DynamicTradeParams memory callbackData = abi.decode(\\n params.secondaryTradeParams, (DynamicTradeParams)\\n );\\n\\n// Remove the line below\\n if (callbackData.oracleSlippagePercent > slippageLimitPercent) {\\n// Add the line below\\n if (callbackData.oracleSlippagePercent == 0 || callbackData.oracleSlippagePercent > slippageLimitPercent) {\\n revert Errors.SlippageTooHigh(callbackData.oracleSlippagePercent, slippageLimitPercent);\\n }\\n }\\n```\\n\\nAppendix I - Normal and Post Maturity Vault Settlement\\nThe `settlementSlippageLimitPercent` and `postMaturitySettlementSlippageLimitPercent` of the vault are set to 5% and 10% respectively per the environment file provided by Notional.\\n```\\nFile: BalancerEnvironment.py\\n "settlementSlippageLimitPercent": 5e6, # 5%\\n "postMaturitySettlementSlippageLimitPercent": 10e6, # 10%\\n```\\n\\nWhen a user calls the `settleVaultNormal` or `settleVaultPostMaturity` function, the vault will validate that the slippage (DynamicTradeParams.oracleSlippagePercent) defined by the caller is within the acceptable slippage range by calling `SettlementUtils._decodeParamsAndValidate` function.\\n```\\nFile: MetaStable2TokenAuraVault.sol\\n function settleVaultNormal(\\n uint256 maturity,\\n uint256 strategyTokensToRedeem,\\n bytes calldata data\\n ) external {\\n if (maturity <= block.timestamp) {\\n revert Errors.PostMaturitySettlement();\\n }\\n if (block.timestamp < maturity - SETTLEMENT_PERIOD_IN_SECONDS) {\\n revert Errors.NotInSettlementWindow();\\n }\\n MetaStable2TokenAuraStrategyContext memory context = _strategyContext();\\n SettlementUtils._validateCoolDown(\\n context.baseStrategy.vaultState.lastSettlementTimestamp,\\n context.baseStrategy.vaultSettings.settlementCoolDownInMinutes\\n );\\n RedeemParams memory params = SettlementUtils._decodeParamsAndValidate(\\n context.baseStrategy.vaultSettings.settlementSlippageLimitPercent,\\n data\\n );\\n MetaStable2TokenAuraHelper.settleVault(\\n context, maturity, strategyTokensToRedeem, params\\n );\\n context.baseStrategy.vaultState.lastSettlementTimestamp = uint32(block.timestamp);\\n context.baseStrategy.vaultState.setStrategyVaultState();\\n }\\n\\n function settleVaultPostMaturity(\\n uint256 maturity,\\n uint256 strategyTokensToRedeem,\\n bytes calldata data\\n ) external onlyNotionalOwner {\\n if (block.timestamp < maturity) {\\n revert Errors.HasNotMatured();\\n }\\n MetaStable2TokenAuraStrategyContext memory context = _strategyContext();\\n SettlementUtils._validateCoolDown(\\n context.baseStrategy.vaultState.lastPostMaturitySettlementTimestamp,\\n 
context.baseStrategy.vaultSettings.postMaturitySettlementCoolDownInMinutes\\n );\\n RedeemParams memory params = SettlementUtils._decodeParamsAndValidate(\\n context.baseStrategy.vaultSettings.postMaturitySettlementSlippageLimitPercent,\\n data\\n );\\n MetaStable2TokenAuraHelper.settleVault(\\n context, maturity, strategyTokensToRedeem, params\\n );\\n context.baseStrategy.vaultState.lastPostMaturitySettlementTimestamp = uint32(block.timestamp); \\n context.baseStrategy.vaultState.setStrategyVaultState(); \\n }\\n```\\n\\nSince the same vulnerable `SettlementUtils._decodeParamsAndValidate` function is being used here, the `settleVaultNormal` and `settleVaultPostMaturity` functions are affected by this issue too. | Malicious users can trigger the permissionless `settleVaultEmergency` function and cause the trade to suffer huge slippage. This results in loss of assets for the vaults and their users. | ```\\nFile: BalancerEnvironment.py\\n "postMaturitySettlementSlippageLimitPercent": 10e6, # 10%\\n "emergencySettlementSlippageLimitPercent": 10e6, # 10%\\n```\\n |
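Beyond the decode-time fix shown above, the same invariant can be enforced wherever a slippage parameter is consumed. The following is a sketch under the assumption that a shared helper (here named `SlippageValidation`) is acceptable; it is not the project's actual code:

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative hardening: a zero slippage parameter must never be interpreted
/// as "no limit" on a permissionless settlement path.
library SlippageValidation {
    function requireBoundedSlippage(uint32 slippageLimit, uint32 maxSlippageLimit) internal pure {
        // 0 previously disabled the limit entirely (a type(uint256).max sell
        // limit or a 0 buy limit inside _getLimitAmount).
        require(slippageLimit != 0, "Zero slippage limit");
        require(slippageLimit <= maxSlippageLimit, "Slippage too high");
    }
}
```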
Rely On Balancer Oracle Which Is Not Updated Frequently | medium | The vault relies on Balancer Oracle which is not updated frequently.\\nNote: This issue affects the MetaStable2 balancer leverage vault\\nThe issue is that this pool only handled ~1.5 transactions per day based on the last 5 days' data. In terms of average, the price will only be updated once every 16 hours. There are also many days that there is only 1 transaction. The following shows the number of transactions for each day within the audit period.\\n5 Oct 2022 - 3 transactions\\n4 Oct 2022 - 1 transaction\\n3 Oct 2022 - 1 transaction\\n2 Oct 2022 - 2 transactions\\n1 Oct 2022 - 1 transaction\\nNote that the price will only be updated whenever a transaction (e.g. swap) within the Balancer pool is triggered. Due to the lack of updates, the price provided by Balancer Oracle will not reflect the true value of the assets. Considering the stETH/ETH Balancer pool, the price of the stETH or ETH provided will not reflect the true value in the market.\\n```\\nFile: TwoTokenPoolUtils.sol\\n /// @notice Gets the oracle price pair price between two tokens using a weighted\\n /// average between a chainlink oracle and the balancer TWAP oracle.\\n /// @param poolContext oracle context variables\\n /// @param oracleContext oracle context variables\\n /// @param tradingModule address of the trading module\\n /// @return oraclePairPrice oracle price for the pair in 18 decimals\\n function _getOraclePairPrice(\\n TwoTokenPoolContext memory poolContext,\\n OracleContext memory oracleContext, \\n ITradingModule tradingModule\\n ) internal view returns (uint256 oraclePairPrice) {\\n // NOTE: this balancer price is denominated in 18 decimal places\\n uint256 balancerWeightedPrice;\\n if (oracleContext.balancerOracleWeight > 0) {\\n uint256 balancerPrice = BalancerUtils._getTimeWeightedOraclePrice(\\n address(poolContext.basePool.pool),\\n IPriceOracle.Variable.PAIR_PRICE,\\n oracleContext.oracleWindowInSeconds\\n );\\n\\n if (poolContext.primaryIndex == 1) {\\n // If the primary index is the second token, we need to invert\\n // the balancer price.\\n balancerPrice = BalancerConstants.BALANCER_PRECISION_SQUARED / balancerPrice;\\n }\\n\\n balancerWeightedPrice = balancerPrice * oracleContext.balancerOracleWeight;\\n }\\n\\n uint256 chainlinkWeightedPrice;\\n if (oracleContext.balancerOracleWeight < BalancerConstants.BALANCER_ORACLE_WEIGHT_PRECISION) {\\n (int256 rate, int256 decimals) = tradingModule.getOraclePrice(\\n poolContext.primaryToken, poolContext.secondaryToken\\n );\\n require(rate > 0);\\n require(decimals >= 0);\\n\\n if (uint256(decimals) != BalancerConstants.BALANCER_PRECISION) {\\n rate = (rate * int256(BalancerConstants.BALANCER_PRECISION)) / decimals;\\n }\\n\\n // No overflow in rate conversion, checked above\\n chainlinkWeightedPrice = uint256(rate) * \\n (BalancerConstants.BALANCER_ORACLE_WEIGHT_PRECISION - oracleContext.balancerOracleWeight);\\n }\\n\\n oraclePairPrice = (balancerWeightedPrice + chainlinkWeightedPrice) / \\n BalancerConstants.BALANCER_ORACLE_WEIGHT_PRECISION;\\n }\\n```\\n\\n```\\nFile: BalancerUtils.sol\\n function _getTimeWeightedOraclePrice(\\n address pool,\\n IPriceOracle.Variable variable,\\n uint256 secs\\n ) internal view returns (uint256) {\\n IPriceOracle.OracleAverageQuery[]\\n memory queries = new IPriceOracle.OracleAverageQuery[](1);\\n\\n queries[0].variable = variable;\\n queries[0].secs = secs;\\n queries[0].ago = 0; // now\\n\\n // Gets the balancer time weighted average price denominated 
in the first token\\n        return IPriceOracle(pool).getTimeWeightedAverage(queries)[0];\\n    }\\n```\\n | Although it is not possible to obtain a price pair that perfectly reflects the true value of an asset in the real world, the vault should attempt to minimize inaccuracy and slippage as much as possible. This can be done by choosing a more accurate Oracle that is updated more frequently instead of the infrequently updated Balancer Oracle.\\nChainlink should be used as the primary Oracle for the price pair. If a secondary Oracle is needed for a price pair, consider using the Tellor Oracle instead of the Balancer Oracle. An example of how Chainlink and Tellor work together in a live protocol can be found here\\nObtaining the time-weighted average price of the BPT LP token from the Balancer Oracle is fine as the Balancer pool is the source of truth. However, getting the price of ETH or stETH from the Balancer Oracle would not be a good option.\\nOn a side note, it was observed that the weighting of the price pair is Balancer Oracle - 60% and Chainlink - 40%. Thus, this will theoretically reduce the impact of inaccurate prices provided by the Balancer Oracle by around half. However, the team should still consider using a better Oracle as almost all the functions within the vault depend on an accurate price of the underlying assets to operate.\\nNote: For the stETH/ETH balancer leverage vault, the price pair is computed based on a weighted average of the Balancer Oracle and Chainlink. Based on the test script, the weighting is Balancer Oracle - 60% and Chainlink - 40%.\\n```\\nFile: BalancerEnvironment.py\\n    "maxRewardTradeSlippageLimitPercent": 5e6,\\n    "balancerOracleWeight": 0.6e4, # 60%\\n    "settlementCoolDownInMinutes": 60 * 6, # 6 hour settlement cooldown\\n```\\n | The price provided by the function will not reflect the true value of the assets; it might be overvalued or undervalued. The affected function is used in almost all functions within the vault. For instance, it is part of the critical `_convertStrategyToUnderlying` function that computes the value of the strategy token in terms of its underlying assets. As a result, it might cause the following:\\nVault Settlement - Vault settlement requires computing the underlying value of the strategy tokens. It involves dealing with a large number of assets, and thus even a slight slippage in the price will be significantly amplified.\\nDeleverage/Liquidation of Account - If the price provided does not reflect the true value, users whose debt ratio is close to the liquidation threshold might be prematurely deleveraged/liquidated since their total asset value might be undervalued.\\nBorrowing - If the price provided does not reflect the true value, the assets of some users might be overvalued, and thus they are able to over-borrow from the vault. 
| ```\\nFile: TwoTokenPoolUtils.sol\\n /// @notice Gets the oracle price pair price between two tokens using a weighted\\n /// average between a chainlink oracle and the balancer TWAP oracle.\\n /// @param poolContext oracle context variables\\n /// @param oracleContext oracle context variables\\n /// @param tradingModule address of the trading module\\n /// @return oraclePairPrice oracle price for the pair in 18 decimals\\n function _getOraclePairPrice(\\n TwoTokenPoolContext memory poolContext,\\n OracleContext memory oracleContext, \\n ITradingModule tradingModule\\n ) internal view returns (uint256 oraclePairPrice) {\\n // NOTE: this balancer price is denominated in 18 decimal places\\n uint256 balancerWeightedPrice;\\n if (oracleContext.balancerOracleWeight > 0) {\\n uint256 balancerPrice = BalancerUtils._getTimeWeightedOraclePrice(\\n address(poolContext.basePool.pool),\\n IPriceOracle.Variable.PAIR_PRICE,\\n oracleContext.oracleWindowInSeconds\\n );\\n\\n if (poolContext.primaryIndex == 1) {\\n // If the primary index is the second token, we need to invert\\n // the balancer price.\\n balancerPrice = BalancerConstants.BALANCER_PRECISION_SQUARED / balancerPrice;\\n }\\n\\n balancerWeightedPrice = balancerPrice * oracleContext.balancerOracleWeight;\\n }\\n\\n uint256 chainlinkWeightedPrice;\\n if (oracleContext.balancerOracleWeight < BalancerConstants.BALANCER_ORACLE_WEIGHT_PRECISION) {\\n (int256 rate, int256 decimals) = tradingModule.getOraclePrice(\\n poolContext.primaryToken, poolContext.secondaryToken\\n );\\n require(rate > 0);\\n require(decimals >= 0);\\n\\n if (uint256(decimals) != BalancerConstants.BALANCER_PRECISION) {\\n rate = (rate * int256(BalancerConstants.BALANCER_PRECISION)) / decimals;\\n }\\n\\n // No overflow in rate conversion, checked above\\n chainlinkWeightedPrice = uint256(rate) * \\n (BalancerConstants.BALANCER_ORACLE_WEIGHT_PRECISION - oracleContext.balancerOracleWeight);\\n }\\n\\n oraclePairPrice = (balancerWeightedPrice + chainlinkWeightedPrice) / \\n BalancerConstants.BALANCER_ORACLE_WEIGHT_PRECISION;\\n }\\n```\\n |
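A rough sketch of the Chainlink-primary / secondary-fallback pattern described above. Chainlink's `AggregatorV3Interface.latestRoundData()` is the real feed interface; the `ISecondaryOracle` abstraction (e.g. wrapping a Tellor adapter), the `maxStaleness` threshold, and the fallback wiring are assumptions for illustration:

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface AggregatorV3Interface {
    function latestRoundData()
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

/// Hypothetical secondary oracle (e.g. a Tellor adapter) normalized to 18 decimals.
interface ISecondaryOracle {
    function getPrice() external view returns (uint256 price, uint256 updatedAt);
}

contract PairPriceOracle {
    AggregatorV3Interface public immutable chainlinkFeed;
    ISecondaryOracle public immutable secondaryOracle;
    uint256 public immutable maxStaleness; // e.g. 1 hours

    constructor(AggregatorV3Interface chainlinkFeed_, ISecondaryOracle secondaryOracle_, uint256 maxStaleness_) {
        chainlinkFeed = chainlinkFeed_;
        secondaryOracle = secondaryOracle_;
        maxStaleness = maxStaleness_;
    }

    /// @notice Returns the pair price, preferring Chainlink and falling back to
    /// the secondary oracle when the Chainlink answer is stale or invalid.
    function getPairPrice() external view returns (uint256) {
        (, int256 answer,, uint256 updatedAt,) = chainlinkFeed.latestRoundData();
        if (answer > 0 && block.timestamp - updatedAt <= maxStaleness) {
            return uint256(answer);
        }
        (uint256 price, uint256 secondaryUpdatedAt) = secondaryOracle.getPrice();
        require(price > 0 && block.timestamp - secondaryUpdatedAt <= maxStaleness, "Stale price");
        return price;
    }
}
```

The key point is that staleness is checked explicitly, rather than averaging in an oracle that may not have been updated for many hours.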
Attackers Can DOS Balancer Vaults By Bypassing The BPT Threshold | medium | Malicious users can lock up all the leverage vaults offered by Notional causing denial-of-service by bypassing the BPT threshold and subseqently trigger an emergency settlement against the vaults.\\nThe current BPT threshold is set to 20% of the total BTP supply based on the environment file provided during the audit.\\n```\\nFile: BalancerEnvironment.py\\n "oracleWindowInSeconds": 3600,\\n "maxBalancerPoolShare": 2e3, # 20%\\n "settlementSlippageLimitPercent": 5e6, # 5%\\n```\\n\\n```\\nFile: BalancerVaultStorage.sol\\n function _bptThreshold(StrategyVaultSettings memory strategyVaultSettings, uint256 totalBPTSupply) \\n internal pure returns (uint256) {\\n return (totalBPTSupply * strategyVaultSettings.maxBalancerPoolShare) / BalancerConstants.VAULT_PERCENT_BASIS;\\n }\\n```\\n\\nWhen the total number of BPT owned by the vault exceeds the BPT threshold, no one will be able to enter the vault as per the require check at Line 295-296 within the `TwoTokenPoolUtils._joinPoolAndStake` function.\\n```\\nFile: TwoTokenPoolUtils.sol\\n function _joinPoolAndStake(\\n TwoTokenPoolContext memory poolContext,\\n StrategyContext memory strategyContext,\\n AuraStakingContext memory stakingContext,\\n uint256 primaryAmount,\\n uint256 secondaryAmount,\\n uint256 minBPT\\n ) internal returns (uint256 bptMinted) {\\n // prettier-ignore\\n PoolParams memory poolParams = poolContext._getPoolParams( \\n primaryAmount, \\n secondaryAmount,\\n true // isJoin\\n );\\n\\n bptMinted = BalancerUtils._joinPoolExactTokensIn({\\n context: poolContext.basePool,\\n params: poolParams,\\n minBPT: minBPT\\n });\\n\\n // Check BPT threshold to make sure our share of the pool is\\n // below maxBalancerPoolShare\\n uint256 bptThreshold = strategyContext.vaultSettings._bptThreshold(\\n poolContext.basePool.pool.totalSupply()\\n );\\n uint256 bptHeldAfterJoin = strategyContext.totalBPTHeld + bptMinted;\\n if (bptHeldAfterJoin > bptThreshold)\\n revert Errors.BalancerPoolShareTooHigh(bptHeldAfterJoin, bptThreshold);\\n\\n // Transfer token to Aura protocol for boosted staking\\n stakingContext.auraBooster.deposit(stakingContext.auraPoolId, bptMinted, true); // stake = true\\n }\\n```\\n\\nAnother key point that is critical for this issue is that when the total number of BPT owned by the vault exceeds the BPT threshold, an emergency settlement can be triggered against the vault and anyone can triggered it as it is permissionless. A major side-effect of an emergency settlement is that the vault will be locked up after the emergency settlement. No one is allowed to enter the vault and users are only allowed to exit from the vault by taking their proportional share of cash and strategy tokens. The reason is that after the emergency settlement, there will be some asset cash balance in the vault and this will cause no one to be able to enter the vault due to the require check at Line 218. 
This side-effect has been verified by reviewing the codebase and clarifying with the sponsors.\\n```\\nFile: VaultState.sol\\n function enterMaturity(\\n VaultState memory vaultState,\\n VaultAccount memory vaultAccount,\\n VaultConfig memory vaultConfig,\\n uint256 strategyTokenDeposit,\\n uint256 additionalUnderlyingExternal,\\n bytes calldata vaultData\\n ) internal returns (uint256 strategyTokensAdded) {\\n // If the vault state is holding asset cash this would mean that there is some sort of emergency de-risking\\n // event or the vault is in the process of settling debts. In both cases, we do not allow accounts to enter\\n // the vault.\\n require(vaultState.totalAssetCash == 0);\\n```\\n\\nIf an attacker could force an emergency settlement on a vault anytime, he would be able to perform a DOS on the vault since the vault will basically be locked up after it. The following demonstrates how this can be performed:\\nAssume that the total supply of BTP in the WETH/stETH Balancer Pool is 100,000 Therefore, the BPT threshold of the vault will be 20,000.\\nAssume that the total number of BPT held by the vault is 19,900.\\nNote that under normal circumstances, it is not possible for the users to exceed the BPT threshold because the transaction will revert if the `bptHeldAfterJoin > bptThreshold` after the user enters the vault.\\nNote that at this point, the emergency settlement CANNOT be triggered against the vault because the vault has not exceeded BPT threshold yet\\nBob (attacker) flash-loans a large amount of ETH from dydx where the fee is almost non-existence (1 Wei Only)\\nBob allocates a portion of his ETH to join the WETH/stETH Balancer Pool. This will cause the total supply of BPT to increase significantly to 200,000.\\nBob allocates a portion of his ETH to enter the vault and causes the total number of BPT held by the vault to increase by 150 from 19,900 to 20,050. This is allowed because the total supply of BPT has increased to 200,000, and thus the BPT threshold has increased to 40,000. Also, Bob does not leverage himself and does not borrow from Notional since the flash loan already provided him with access to a large number of funds, and thus he does not need to pay for any borrowing cost to minimize the cost of this attack.\\nAt this point, due to the inflow of 150 BPT to the Balancer Pool, the total supply of BPT increase from 200,000 to 200,150.\\nAfter entering the vault, Bob exits the WETH/stETH Balancer Pool entirely with all his 100,000 BPT position. This will cause the total supply of BPT to fall back to 100,150. Per my research, there is no exit fee when a Liquidity Provider exits a pool. Also, a Liquidity Provider only suffers a loss due to impermanent loss. However, since all these steps are executed within the same transaction, there is no impermanent loss because no one perform any token swap. Thus, there is no cost incurred by Bob for this step.\\nNote that at this point, the emergency settlement CAN be triggered against the vault because the vault has exceeded the BPT threshold. The total number of BPT held by the vault is 20,050, and the BPT threshold is 20,030 (=100,150 * 0.2).\\nAnyone can trigger the emergency settlement as it is permissionless. Bob triggered an emergency settlement against the vault, and 20 BPT will be sold off in the market so that the vault will not exceed the BPT threshold. 
It is important to ensure that the number of BPTs to be sold is kept as low as possible so that the total value of the vault will not be reduced by slippage during the trade. This is because Bob still owns the shares of the vault and he wants to get back as much of his original deposit as possible later. This value can be optimized further with Math.\\nAs mentioned earlier, after an emergency settlement, the vault will be locked up. No one is allowed to enter the vault and users are only allowed to exit from the vault by taking their proportional share of cash and strategy tokens.\\nBob proceeds to redeem all his shares from the vault. He will get back all of his deposits minus the 20 BPT slippage loss during the emergency settlement that is split proportionally among all vault shareholders which is insignificant. Note that the Notional's leverage vault does not impose any exit fee.\\nBob proceeds to repay back his loan and pay 1 wei as the fee to dydx.\\nThe cost of attack is 1 wei (flash-loan fee) + 20 BPT slippage loss during the emergency settlement that is split proportionally among all vault shareholders, which is insignificant. The slippage loss during emergency settlement can be minimized by causing the total number of BPT held by the vault to exceed the BPT threshold by the smallest possible value.\\nAll the above steps will be executed within a single block/transaction. | Short term, consider the following measures to mitigate the issue:\\nThe emergency settlement function is permissionless and can be called by anyone. It is recommended to implement access control to ensure that this function can only be triggered by Notional.\\nThere is no withdrawal fee. Also, there is no deposit fee as long as users did not borrow from Notional. Therefore, this attack is mostly profitable. It is recommended to impose a fee on the users of the vault even if the users did not borrow from Notional. All users should be charged a fee for the use of the vault. This will make the attack less likely to be profitable in most cases.\\nUsers can enter and exit the vault within the same transaction/block. This allows the attacker to leverage the flash-loan facility to reduce the cost of the attack to almost nothing. It is recommended to prevent users from entering and exiting the vault within the same transaction/block. If the user entered the vault in this block, he/she could only exit at the next block.\\nLong term, update the implementation of the vault so that the vault will not be locked up after an emergency settlement. After selling off the excess BPT, the vault should allow users to enter the vault as per normal. | Malicious users can lock up all the leverage vaults offered by Notional causing denial-of-service. This results in a loss of funds for the protocol as the vault is no longer generating profit for the protocol, and also a loss of funds for vault users as they cannot realize the profits because they are forced to exit the vault prematurely.\\nThe following are various reasons why someone would want to perform a DOS on Notional vaults:\\nDamage the reputation of Notional, and reduce users' trust in Notional\\nA competitor who is also offering a leverage vault attempts to bring down Notional\\nSomeone who shorted Notional's protocol token | ```\\nFile: BalancerEnvironment.py\\n "oracleWindowInSeconds": 3600,\\n "maxBalancerPoolShare": 2e3, # 20%\\n "settlementSlippageLimitPercent": 5e6, # 5%\\n```\\n |
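One of the short-term mitigations listed above (disallowing entry and exit within the same block) could look roughly like the sketch below; the `lastEntryBlock` mapping and hook names are illustrative, not the vault's actual storage:

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative guard: an account that enters the vault in block N may only
/// exit from block N + 1 onwards, which breaks single-transaction flash-loan
/// attacks that inflate the BPT threshold and immediately unwind.
abstract contract SameBlockEntryExitGuard {
    mapping(address => uint256) private lastEntryBlock;

    function _recordEntry(address account) internal {
        lastEntryBlock[account] = block.number;
    }

    function _checkExitAllowed(address account) internal view {
        require(block.number > lastEntryBlock[account], "Cannot exit in entry block");
    }
}
```

The enter and exit entry points would call `_recordEntry` and `_checkExitAllowed` respectively, so borrowed funds cannot be unwound within the same transaction.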
Corruptible Upgradability Pattern | medium | Storage of Boosted3TokenAuraVault and MetaStable2TokenAuraVault vaults might be corrupted during an upgrade.\\nFollowing are the inheritance of the Boosted3TokenAuraVault and MetaStable2TokenAuraVault vaults.\\nNote: The contracts highlighted in Orange mean that there are no gap slots defined. The contracts highlighted in Green mean that gap slots have been defined\\nInheritance of the MetaStable2TokenAuraVault vault\\n```\\ngraph BT;\\n classDef nogap fill:#f96;\\n classDef hasgap fill:#99cc00;\\n MetaStable2TokenAuraVault-->MetaStable2TokenVaultMixin:::nogap\\n MetaStable2TokenVaultMixin:::nogap-->TwoTokenPoolMixin:::nogap\\n MetaStable2TokenVaultMixin:::nogap-->BalancerOracleMixin:::nogap\\n TwoTokenPoolMixin:::nogap-->PoolMixin:::nogap\\n PoolMixin:::nogap-->AuraStakingMixin:::nogap\\n PoolMixin:::nogap-->BalancerStrategyBase;\\n BalancerStrategyBase:::hasgap-->BaseStrategyVault:::hasgap\\n BalancerStrategyBase:::hasgap-->UUPSUpgradeable\\n```\\n\\nInheritance of the Boosted3TokenAuraVault vault\\n```\\ngraph BT;\\n classDef nogap fill:#f96;\\n classDef hasgap fill:#99cc00;\\n Boosted3TokenAuraVault-->Boosted3TokenPoolMixin:::nogap\\n Boosted3TokenPoolMixin:::nogap-->PoolMixin:::nogap\\n PoolMixin:::nogap-->BalancerStrategyBase\\n PoolMixin:::nogap-->AuraStakingMixin:::nogap\\n BalancerStrategyBase:::hasgap-->BaseStrategyVault:::hasgap\\n BalancerStrategyBase:::hasgap-->UUPSUpgradeable\\n```\\n\\nThe Boosted3TokenAuraVault and MetaStable2TokenAuraVault vaults are meant to be upgradeable. However, it inherits contracts that are not upgrade-safe.\\nThe gap storage has been implemented on the `BaseStrategyVault` and `BalancerStrategyBase` contracts inherited by the Boosted3TokenAuraVault and MetaStable2TokenAuraVault vaults.\\n```\\nabstract contract BaseStrategyVault is Initializable, IStrategyVault {\\n using TokenUtils for IERC20;\\n using TradeHandler for Trade;\\n ..SNIP..\\n // Storage gap for future potential upgrades\\n uint256[45] private __gap;\\n}\\n```\\n\\n```\\nabstract contract BalancerStrategyBase is BaseStrategyVault, UUPSUpgradeable {\\n /** Immutables */\\n uint32 internal immutable SETTLEMENT_PERIOD_IN_SECONDS;\\n ..SNIP..\\n // Storage gap for future potential upgrades\\n uint256[100] private __gap;\\n}\\n```\\n\\nHowever, no gap storage is implemented on the `Boosted3TokenPoolMixin`, `MetaStable2TokenVaultMixin`, `TwoTokenPoolMixin`, `PoolMixin`, `AuraStakingMixin` and `BalancerOracleMixin` contracts inherited by the Boosted3TokenAuraVault and MetaStable2TokenAuraVault vaults.\\nThus, adding new storage variables to any of these inherited contracts can potentially overwrite the beginning of the storage layout of the child contract. causing critical misbehaviors in the system. | Consider defining an appropriate storage gap in each upgradeable parent contract at the end of all the storage variable definitions as follows:\\n```\\nuint256[50] __gap; // gap to reserve storage in the contract for future variable additions\\n```\\n\\nReference\\nA similar issue was found in the past audit report: | Storage of Boosted3TokenAuraVault and MetaStable2TokenAuraVault vaults might be corrupted during upgrading, thus causing the vaults to be broken and assets to be stuck. 
| ```\\ngraph BT;\\n classDef nogap fill:#f96;\\n classDef hasgap fill:#99cc00;\\n MetaStable2TokenAuraVault-->MetaStable2TokenVaultMixin:::nogap\\n MetaStable2TokenVaultMixin:::nogap-->TwoTokenPoolMixin:::nogap\\n MetaStable2TokenVaultMixin:::nogap-->BalancerOracleMixin:::nogap\\n TwoTokenPoolMixin:::nogap-->PoolMixin:::nogap\\n PoolMixin:::nogap-->AuraStakingMixin:::nogap\\n PoolMixin:::nogap-->BalancerStrategyBase;\\n BalancerStrategyBase:::hasgap-->BaseStrategyVault:::hasgap\\n BalancerStrategyBase:::hasgap-->UUPSUpgradeable\\n```\\n |
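Applying the recommendation to the inheritance chain, each non-final parent would reserve its own gap; the contract below is a placeholder showing the pattern, not the actual mixin contents:

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative pattern: every upgrade-unsafe parent reserves storage slots so
/// future variables can be appended without shifting the child's layout.
abstract contract ExamplePoolMixin {
    uint256 internal someFutureSetting; // placeholder state variable

    // Reserve slots for future variable additions in this mixin.
    uint256[49] private __gap;
}
```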
Did Not Approve To Zero First | medium | Allowance was not set to zero first before changing the allowance.\\nSome ERC20 tokens (like USDT) do not work when changing the allowance from an existing non-zero allowance value. For example Tether (USDT)'s `approve()` function will revert if the current approval is not zero, to protect against front-running changes of approvals.\\nThe following attempt to call the `approve()` function without setting the allowance to zero first.\\n```\\nFile: TokenUtils.sol\\n function checkApprove(IERC20 token, address spender, uint256 amount) internal {\\n if (address(token) == address(0)) return;\\n\\n IEIP20NonStandard(address(token)).approve(spender, amount);\\n _checkReturnCode();\\n }\\n```\\n\\nHowever, if the token involved is an ERC20 token that does not work when changing the allowance from an existing non-zero allowance value, it will break a number of key functions or features of the protocol as the `TokenUtils.checkApprove` function is utilised extensively within the vault as shown below.\\n```\\nFile: TwoTokenPoolUtils.sol\\n function _approveBalancerTokens(TwoTokenPoolContext memory poolContext, address bptSpender) internal {\\n IERC20(poolContext.primaryToken).checkApprove(address(Deployments.BALANCER_VAULT), type(uint256).max);\\n IERC20(poolContext.secondaryToken).checkApprove(address(Deployments.BALANCER_VAULT), type(uint256).max);\\n // Allow BPT spender to pull BALANCER_POOL_TOKEN\\n IERC20(address(poolContext.basePool.pool)).checkApprove(bptSpender, type(uint256).max);\\n }\\n```\\n\\n```\\nFile: Boosted3TokenPoolUtils.sol\\n function _approveBalancerTokens(ThreeTokenPoolContext memory poolContext, address bptSpender) internal {\\n poolContext.basePool._approveBalancerTokens(bptSpender);\\n\\n IERC20(poolContext.tertiaryToken).checkApprove(address(Deployments.BALANCER_VAULT), type(uint256).max);\\n\\n // For boosted pools, the tokens inside pool context are AaveLinearPool tokens.\\n // So, we need to approve the _underlyingToken (primary borrow currency) for trading.\\n IBoostedPool underlyingPool = IBoostedPool(poolContext.basePool.primaryToken);\\n address primaryUnderlyingAddress = BalancerUtils.getTokenAddress(underlyingPool.getMainToken());\\n IERC20(primaryUnderlyingAddress).checkApprove(address(Deployments.BALANCER_VAULT), type(uint256).max);\\n }\\n```\\n\\n```\\nFile: TradingUtils.sol\\n /// @notice Approve exchange to pull from this contract\\n /// @dev approve up to trade.amount for EXACT_IN trades and up to trade.limit\\n /// for EXACT_OUT trades\\n function _approve(Trade memory trade, address spender) private {\\n uint256 allowance = _isExactIn(trade) ? trade.amount : trade.limit;\\n IERC20(trade.sellToken).checkApprove(spender, allowance);\\n }\\n```\\n\\n```\\nFile: StrategyUtils.sol\\n IERC20(buyToken).checkApprove(address(Deployments.WRAPPED_STETH), amountBought);\\n uint256 wrappedAmount = Deployments.WRAPPED_STETH.balanceOf(address(this));\\n /// @notice the amount returned by wrap is not always accurate for some reason\\n Deployments.WRAPPED_STETH.wrap(amountBought);\\n amountBought = Deployments.WRAPPED_STETH.balanceOf(address(this)) - wrappedAmount;\\n```\\n | It is recommended to set the allowance to zero before increasing the allowance and use safeApprove/safeIncreaseAllowance. | A number of features within the vaults will not work if the `approve` function reverts. 
| ```\\nFile: TokenUtils.sol\\n function checkApprove(IERC20 token, address spender, uint256 amount) internal {\\n if (address(token) == address(0)) return;\\n\\n IEIP20NonStandard(address(token)).approve(spender, amount);\\n _checkReturnCode();\\n }\\n```\\n |
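A sketch of the approve-to-zero pattern applied to the quoted `checkApprove`, reusing its `IEIP20NonStandard` and `_checkReturnCode()` helpers as-is (so it is a fragment, not a standalone contract); OpenZeppelin's `SafeERC20.forceApprove` / `safeIncreaseAllowance` are equivalent alternatives:

```
    function checkApprove(IERC20 token, address spender, uint256 amount) internal {
        if (address(token) == address(0)) return;

        // Tokens such as USDT revert when changing a non-zero allowance directly,
        // so reset the allowance to zero before setting the new value.
        if (token.allowance(address(this), spender) != 0) {
            IEIP20NonStandard(address(token)).approve(spender, 0);
            _checkReturnCode();
        }
        IEIP20NonStandard(address(token)).approve(spender, amount);
        _checkReturnCode();
    }
```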
`deleverageAccount` can be used by an address to enter a vault that would otherwise be restricted by the `requireValidAccount` check in `enterVault` | medium | `deleverageAccount` can be used by an address to enter a vault that would otherwise be restricted by the `requireValidAccount` check in `enterVault`\\nWhen `enterVault` in `VaultAccountAction.sol` is called, the first function that is called is `requireValidAccount`. This function checks to ensure that the passed-in `account` parameter is not a system-level `account` address:\\n```\\nrequire(account != Constants.RESERVE); // Reserve address is address(0)\\nrequire(account != address(this));\\n(\\n uint256 isNToken,\\n /* incentiveAnnualEmissionRate */,\\n /* lastInitializedTime */,\\n /* assetArrayLength */,\\n /* parameters */\\n) = nTokenHandler.getNTokenContext(account);\\nrequire(isNToken == 0);\\n```\\n\\nWith the above checks, `requireValidAccount` ensures that any Notional system-level account cannot enter a vault. However, `deleverageAccount` in `VaultAccountAction.sol` allows liquidators to transfer vault shares from a liquidated account into their own account. In the case that a liquidator is not already entered into a vault, then `deleverageAccount` will instantiate a vault account for them (using _transferLiquidatorProfits) before depositing the liquidated account's vault shares into the newly-instantiated account. This effectively circumvents the `requireValidAccount` check in `enterVault`. | Consider updating the `require` statement in `_transferLiquidatorProfits` to the following:\\n```\\nrequire(liquidator.maturity == maturity, "Vault Shares Mismatch"); // dev: has vault shares\\n```\\n\\nRemoving the option of allowing addresses that do not have a maturity in the respective vault to receive shares and therefore implicitly enter a vault prevents Notional system accounts from being able to enter into vaults. | Any address that would otherwise be restricted from entering vaults via the `requireValidAccount` check would be able to circumvent that function using `deleverageAccount`. I assume these system-level accounts are restricted from entering vaults as they have access to internal Notional state and are used across the protocol, so having them be able to enter vaults could negatively impact Notional.\\nAssuming that all the relevant Notional system accounts are smart contracts that do not allow arbitrary calls, then having any of the system accounts themselves trigger this issue is infeasible. However, as a result of another issue it is possible for a vault to force an arbitrary address to deleverage accounts, which could be used to force a Notional system account to enter into a vault. | ```\\nrequire(account != Constants.RESERVE); // Reserve address is address(0)\\nrequire(account != address(this));\\n(\\n uint256 isNToken,\\n /* incentiveAnnualEmissionRate */,\\n /* lastInitializedTime */,\\n /* assetArrayLength */,\\n /* parameters */\\n) = nTokenHandler.getNTokenContext(account);\\nrequire(isNToken == 0);\\n```\\n |
No Validation Check Against Decimal Of Secondary Token | medium | There is no validation check against the decimal of the secondary token due to a typo. Thus, this will cause the vault to be broken entirely or the value of the shares to be stuck if a secondary token with more than 18 decimals is added.\\nThere is a typo in Line 65 within the `TwoTokenPoolMixin` contract. The validation at Line 65 should perform a check against the `secondaryDecimals` instead of the `primaryDecimals`. As such, no validation was performed against the secondary token.\\n```\\nFile: TwoTokenPoolMixin.sol\\n constructor(\\n NotionalProxy notional_, \\n AuraVaultDeploymentParams memory params\\n ) PoolMixin(notional_, params) {\\n..SNIP..\\n // If the underlying is ETH, primaryBorrowToken will be rewritten as WETH\\n uint256 primaryDecimals = IERC20(primaryAddress).decimals();\\n // Do not allow decimal places greater than 18\\n require(primaryDecimals <= 18);\\n PRIMARY_DECIMALS = uint8(primaryDecimals);\\n\\n uint256 secondaryDecimals = address(SECONDARY_TOKEN) ==\\n Deployments.ETH_ADDRESS\\n ? 18\\n : SECONDARY_TOKEN.decimals();\\n require(primaryDecimals <= 18);\\n SECONDARY_DECIMALS = uint8(secondaryDecimals);\\n }\\n```\\n\\nIf the decimal of the secondary tokens is more than 18, the `Stable2TokenOracleMath._getSpotPrice` will stop working as the code will revert in Line 24 below because the decimal of secondary tokens is more than 18.\\nWhen the `Stable2TokenOracleMath._getSpotPrice` function stop working, the vaults will be broken entirely because the settle vault and reinvest rewards functions will stop working too. This is because the settle vault and reinvest rewards functions will call the `Stable2TokenOracleMath._getSpotPrice` function internally, resulting in a revert.\\n```\\nFile: Stable2TokenOracleMath.sol\\n function _getSpotPrice(\\n StableOracleContext memory oracleContext, \\n TwoTokenPoolContext memory poolContext, \\n uint256 tokenIndex\\n ) internal view returns (uint256 spotPrice) {\\n // Prevents overflows, we don't expect tokens to be greater than 18 decimals, don't use\\n // equal sign for minor gas optimization\\n require(poolContext.primaryDecimals < 19); /// @dev primaryDecimals overflow\\n require(poolContext.secondaryDecimals < 19); /// @dev secondaryDecimals overflow\\n require(tokenIndex < 2); /// @dev invalid token index\\n```\\n | Update the code to perform the validation against the `secondaryDecimals` state variable.\\n```\\nconstructor(\\n NotionalProxy notional_, \\n AuraVaultDeploymentParams memory params\\n) PoolMixin(notional_, params) {\\n ..SNIP..\\n // If the underlying is ETH, primaryBorrowToken will be rewritten as WETH\\n uint256 primaryDecimals = IERC20(primaryAddress).decimals();\\n // Do not allow decimal places greater than 18\\n require(primaryDecimals <= 18);\\n PRIMARY_DECIMALS = uint8(primaryDecimals);\\n\\n uint256 secondaryDecimals = address(SECONDARY_TOKEN) ==\\n Deployments.ETH_ADDRESS\\n ? 18\\n : SECONDARY_TOKEN.decimals();\\n// Remove the line below\\n require(primaryDecimals <= 18);\\n// Add the line below\\n require(secondaryDecimals <= 18);\\n SECONDARY_DECIMALS = uint8(secondaryDecimals);\\n}\\n```\\n | The `Stable2TokenOracleMath._getSpotPrice` will stop working, which will in turn cause the settle vault and reinvest rewards functions to stop working too. Since a vault cannot be settled, the vault is considered broken. 
If the reinvest rewards function cannot work, the value of users' shares will be stuck as the vault relies on reinvesting rewards to buy more BPT tokens from the market.\\nIn addition, there might be some issues when calculating the price of the tokens since the vault assumes that both primary and secondary tokens have a decimal equal to or less than 18 OR some overflow might occur when processing the token value. | ```\\nFile: TwoTokenPoolMixin.sol\\n constructor(\\n NotionalProxy notional_, \\n AuraVaultDeploymentParams memory params\\n ) PoolMixin(notional_, params) {\\n..SNIP..\\n // If the underlying is ETH, primaryBorrowToken will be rewritten as WETH\\n uint256 primaryDecimals = IERC20(primaryAddress).decimals();\\n // Do not allow decimal places greater than 18\\n require(primaryDecimals <= 18);\\n PRIMARY_DECIMALS = uint8(primaryDecimals);\\n\\n uint256 secondaryDecimals = address(SECONDARY_TOKEN) ==\\n Deployments.ETH_ADDRESS\\n ? 18\\n : SECONDARY_TOKEN.decimals();\\n require(primaryDecimals <= 18);\\n SECONDARY_DECIMALS = uint8(secondaryDecimals);\\n }\\n```\\n |
Vault Share/Strategy Token Calculation Can Be Broken By First User/Attacker | medium | A well-known attack vector for almost all shares-based liquidity pool contracts, where an early user can manipulate the price per share and profit from late users' deposits because of the precision loss caused by the rather large value of price per share.\\nNote: This issue affects MetaStable2 and Boosted3 balancer leverage vaults\\nFor simplicity's sake, we will simplify the `strategy token` minting formula as follows. Also, assume that the 1 `vault share` is equivalent to 1 `strategy token` for this particular strategy vault, therefore, we will use the term `vault share` and `strategy token` interchangeably here.\\n```\\nstrategyToken = (totalBPTHeld == 0) ? bptClaim : (bptClaim * totalStrategyToken) / totalBPTHeld\\n```\\n\\nThe vault minting formula is taken from the following:\\n```\\nFile: StrategyUtils.sol\\n /// @notice Converts BPT to strategy tokens\\n function _convertBPTClaimToStrategyTokens(StrategyContext memory context, uint256 bptClaim)\\n internal pure returns (uint256 strategyTokenAmount) {\\n if (context.totalBPTHeld == 0) {\\n // Strategy tokens are in 8 decimal precision, BPT is in 18. Scale the minted amount down.\\n return (bptClaim * uint256(Constants.INTERNAL_TOKEN_PRECISION)) / \\n BalancerConstants.BALANCER_PRECISION;\\n }\\n\\n // BPT held in maturity is calculated before the new BPT tokens are minted, so this calculation\\n // is the tokens minted that will give the account a corresponding share of the new bpt balance held.\\n // The precision here will be the same as strategy token supply.\\n strategyTokenAmount = (bptClaim * context.vaultState.totalStrategyTokenGlobal) / context.totalBPTHeld;\\n }\\n```\\n\\nIf the attacker who is the first depositor claims 1 BPT, he will receive 1 Strategy Token. So 1 BPT per Strategy Token. At this point in time, `totalBPTHeld = 1` and `totalStrategyToken = 1`.\\nThe attacker obtains 9999 BPT can be obtained from the open market. He proceeds to deposit the 9999 BPT into the Aura reward pool on behalf of the vault. At this point in time, `totalBPTHeld = 10000` and `totalStrategyToken = 1`. So 10000 BPT per Strategy Token. Refer to the "How to increase the total BPT held?" section below for more details.\\nTwo issues can occur from here.\\nIssue 1 - If bptClaim >= totalBPTHeld\\nThe following describes a scenario in which a user's assets are lost and stolen by an attacker. Assume that Alice deposits/borrow some assets and received 19999 BPT. Based on the formula, Alice will only receive 1 Strategy Token. She immediately loses 9999 BPT or half of her assets if she exits the vault or redeems the strategy tokens right after the deposit.\\n```\\nstrategyToken = (bptClaim * totalStrategyToken) / totalBPTHeld\\nstrategyToken = (19999 * 1) / 10000 = 1\\n```\\n\\nIf the attacker exits the vault right after Alice's deposit, the attacker will receive 14999 BPT. He profited 4999 BPT from this attack\\n```\\nbptReceived = (strategyToken * totalBPTHeld) / totalStrategyToken\\nbptReceived = (1 * 29999) / 2 = 14999\\n```\\n\\nIssue 2 - If bptClaim < totalBPTHeld\\nThe following describes a scenario in which a user's assets are lost entirely. 
Assume that Alice deposits/borrow some assets and received 9999 BPT\\n```\\nstrategyToken = (bptClaim * totalStrategyToken) / totalBPTHeld\\nstrategyToken = (9999 * 1) / 10000 = 0\\n```\\n\\nAs such, she deposited 9999 BPT but did not receive any strategy tokens in return.\\nHow to increase the total BPT held?\\nUnlike the vault design seen in other protocols, Notional's leverage vault does not compute the total BPT held by the vault directly via `BTP.balanceOf(address(vault))`. The vault deposit its BPT to the Aura Reward Pool. Therefore, it is not possible to increase the total BPT held by the vault simply by performing a direct BPT token transfer to the vault or Aura Reward Pool in an attempt to increase it.\\nHowever, there is a workaround to increase the total BPT held by the vault, and this can be executed by anyone.\\nThe `totalBPTHeld` within the vault is obtained by calling the `PoolMixin._bptHeld` function.\\n```\\nFile: PoolMixin.sol\\n function _baseStrategyContext() internal view returns(StrategyContext memory) {\\n return StrategyContext({\\n totalBPTHeld: _bptHeld(),\\n settlementPeriodInSeconds: SETTLEMENT_PERIOD_IN_SECONDS,\\n tradingModule: TRADING_MODULE,\\n vaultSettings: BalancerVaultStorage.getStrategyVaultSettings(),\\n vaultState: BalancerVaultStorage.getStrategyVaultState(),\\n feeReceiver: FEE_RECEIVER\\n });\\n }\\n```\\n\\nWithin the `PoolMixin._bptHeld` function, it will call the `AURA_REWARD_POOL.balanceOf(address(this))` to retrieve the number of BPT that the vault has deposited into the Aura Reward Pool.\\n```\\nFile: PoolMixin.sol\\n /// @dev Gets the total BPT held by the aura reward pool\\n function _bptHeld() internal view returns (uint256) {\\n return AURA_REWARD_POOL.balanceOf(address(this));\\n }\\n```\\n\\nThe following is the contract of the AURA_REWARD_POOL taken from the Etherscan. Note that the `AURA_REWARD_POOL.balanceOf` will retrieve the number of BPT tokens held by an account. In this example, the account will be the vault's address.\\n```\\nFile: BaseRewardPool4626.sol\\n/**\\n * @dev Returns the amount of tokens owned by `account`.\\n */\\nfunction balanceOf(address account) public view override(BaseRewardPool, IERC20) returns (uint256) {\\n return BaseRewardPool.balanceOf(account);\\n}\\n```\\n\\n```\\nFile: BaseRewardPool.sol\\nfunction balanceOf(address account) public view virtual returns (uint256) {\\n return _balances[account];\\n}\\n```\\n\\nTo increase the balance, the `deposit(uint256 _pid, uint256 _amount, bool _stake)` function of Aura's Booster contract can be called. However, the problem is that this function will deposit to the `msg.sender` and there is no way to spoof the vault's address. Thus, using this function will not work.\\nHowever, there is a second method that can be used to perform a deposit. The `AURA_REWARD_POOL` point to the `BaseRewardPool4626`, thus the reward pool is an ERC4626 vault. The Aura's ERC4626 vault supports an alternative deposit function called `BaseRewardPool4626.deposit` that allows anyone to deposit on behalf of another account. 
An attacker can leverage the `BaseRewardPool4626.deposit` function by specifying the `receiver` parameter to be the `vault.address` in an attempt to increase the total BPT tokens held by the vault.\\n```\\nFile: BaseRewardPool4626.sol\\n/**\\n * @notice Mints `shares` Vault shares to `receiver`.\\n * @dev Because `asset` is not actually what is collected here, first wrap to required token in the booster.\\n */\\nfunction deposit(uint256 assets, address receiver) public virtual override nonReentrant returns (uint256) {\\n // Transfer "asset" (crvLP) from sender\\n IERC20(asset).safeTransferFrom(msg.sender, address(this), assets);\\n\\n // Convert crvLP to cvxLP through normal booster deposit process, but don't stake\\n uint256 balBefore = stakingToken.balanceOf(address(this));\\n IDeposit(operator).deposit(pid, assets, false);\\n uint256 balAfter = stakingToken.balanceOf(address(this));\\n\\n require(balAfter.sub(balBefore) >= assets, "!deposit");\\n\\n // Perform stake manually, now that the funds have been received\\n _processStake(assets, receiver);\\n\\n emit Deposit(msg.sender, receiver, assets, assets);\\n emit Staked(receiver, assets);\\n return assets;\\n}\\n```\\n\\n```\\nFile: BaseRewardPool.sol \\n/**\\n* @dev Generic internal staking function that basically does 3 things: update rewards based\\n* on previous balance, trigger also on any child contracts, then update balances.\\n* @param _amount Units to add to the users balance\\n* @param _receiver Address of user who will receive the stake\\n*/\\nfunction _processStake(uint256 _amount, address _receiver) internal updateReward(_receiver) {\\n require(_amount > 0, 'RewardPool : Cannot stake 0');\\n\\n //also stake to linked rewards\\n for(uint i=0; i < extraRewards.length; i++){\\n IRewards(extraRewards[i]).stake(_receiver, _amount);\\n }\\n\\n _totalSupply = _totalSupply.add(_amount);\\n _balances[_receiver] = _balances[_receiver].add(_amount);\\n}\\n```\\n | Consider requiring a minimal amount of strategy tokens to be minted for the first minter, and send a portion of the initial mints as a reserve to the Notional Treasury so that the pricePerShare/pricePerStrategyToken can be more resistant to manipulation.\\nReference\\nA similar issue was found in a past Sherlock audit | The attacker can profit from future users' deposits while the late users will lose part of their funds to the attacker. Additionally, it is also possible for users to get no share in return for their deposited funds. | ```\\nstrategyToken = (totalBPTHeld == 0) ? bptClaim : (bptClaim * totalStrategyToken) / totalBPTHeld\\n```\\n |
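A rough, self-contained sketch of the suggested mitigation with simplified share accounting; `MIN_INITIAL_SHARES`, `RESERVE_SHARES`, and `reserveReceiver` are illustrative values/names, and the precision scaling of the real vault is omitted:

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative only: a minimum first mint plus a locked reserve so the
/// BPT-per-share ratio cannot be inflated cheaply by the first depositor.
contract FirstDepositGuardedShares {
    uint256 public constant MIN_INITIAL_SHARES = 1e6; // illustrative floor
    uint256 public constant RESERVE_SHARES = 1e3;     // illustrative locked reserve

    address public immutable reserveReceiver;
    uint256 public totalShares;
    uint256 public totalBPTHeld;
    mapping(address => uint256) public shares;

    constructor(address reserveReceiver_) {
        reserveReceiver = reserveReceiver_;
    }

    function _mintShares(address account, uint256 bptClaim) internal returns (uint256 minted) {
        if (totalShares == 0) {
            minted = bptClaim; // 1:1 on the first deposit (precision scaling omitted)
            require(minted >= MIN_INITIAL_SHARES, "Initial deposit too small");
            // Lock a small reserve permanently with the designated receiver.
            minted -= RESERVE_SHARES;
            shares[reserveReceiver] += RESERVE_SHARES;
            totalShares += RESERVE_SHARES;
        } else {
            minted = (bptClaim * totalShares) / totalBPTHeld;
            require(minted > 0, "Zero shares minted");
        }
        shares[account] += minted;
        totalShares += minted;
        totalBPTHeld += bptClaim;
    }
}
```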
UniV2Adapter#getExecutionData doesn't properly handle native ETH swaps | medium | UniV2Adapter#getExecutionData doesn't properly account for native ETH trades which makes them impossible. Neither method selected supports direct ETH trades, and sender/target are not set correctly for TradingUtils_executeTrade to automatically convert\\n```\\nspender = address(Deployments.UNIV2_ROUTER);\\ntarget = address(Deployments.UNIV2_ROUTER);\\n// msgValue is always zero for uniswap\\n\\nif (\\n tradeType == TradeType.EXACT_IN_SINGLE ||\\n tradeType == TradeType.EXACT_IN_BATCH\\n) {\\n executionCallData = abi.encodeWithSelector(\\n IUniV2Router2.swapExactTokensForTokens.selector,\\n trade.amount,\\n trade.limit,\\n data.path,\\n from,\\n trade.deadline\\n );\\n} else if (\\n tradeType == TradeType.EXACT_OUT_SINGLE ||\\n tradeType == TradeType.EXACT_OUT_BATCH\\n) {\\n executionCallData = abi.encodeWithSelector(\\n IUniV2Router2.swapTokensForExactTokens.selector,\\n trade.amount,\\n trade.limit,\\n data.path,\\n from,\\n trade.deadline\\n );\\n}\\n```\\n\\nUniV2Adapter#getExecutionData either returns the swapTokensForExactTokens or swapExactTokensForTokens, neither of with support native ETH. It also doesn't set spender and target like UniV3Adapter, so _executeTrade won't automatically convert it to a WETH call. The result is that all Uniswap V2 calls made with native ETH will fail. Given that Notional operates in native ETH rather than WETH, this is an important feature that currently does not function. | There are two possible solutions:\\nChange the way that target and sender are set to match the implementation in UniV3Adapter\\nModify the return data to return the correct selector for each case (swapExactETHForTokens, swapTokensForExactETH, etc.)\\nGiven that the infrastructure for Uniswap V3 already exists in TradingUtils_executeTrade the first option would be the easiest, and would give the same results considering it's basically the same as what the router is doing internally anyways. | Uniswap V2 calls won't support native ETH | ```\\nspender = address(Deployments.UNIV2_ROUTER);\\ntarget = address(Deployments.UNIV2_ROUTER);\\n// msgValue is always zero for uniswap\\n\\nif (\\n tradeType == TradeType.EXACT_IN_SINGLE ||\\n tradeType == TradeType.EXACT_IN_BATCH\\n) {\\n executionCallData = abi.encodeWithSelector(\\n IUniV2Router2.swapExactTokensForTokens.selector,\\n trade.amount,\\n trade.limit,\\n data.path,\\n from,\\n trade.deadline\\n );\\n} else if (\\n tradeType == TradeType.EXACT_OUT_SINGLE ||\\n tradeType == TradeType.EXACT_OUT_BATCH\\n) {\\n executionCallData = abi.encodeWithSelector(\\n IUniV2Router2.swapTokensForExactTokens.selector,\\n trade.amount,\\n trade.limit,\\n data.path,\\n from,\\n trade.deadline\\n );\\n}\\n```\\n |
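A partial sketch of the second option, showing only the EXACT_IN branch; it assumes the adapter's `IUniV2Router2` interface is extended with the standard Uniswap V2 ETH entry points (`swapExactETHForTokens`, `swapExactTokensForETH`) and reuses the variable names from the snippet above:

```
// Illustrative EXACT_IN branch: pick the ETH-aware selector and msg.value
// depending on whether the sell or buy side is native ETH.
if (trade.sellToken == Deployments.ETH_ADDRESS) {
    msgValue = trade.amount; // router is payable for ETH -> token swaps
    executionCallData = abi.encodeWithSelector(
        IUniV2Router2.swapExactETHForTokens.selector,
        trade.limit,      // amountOutMin
        data.path,
        from,
        trade.deadline
    );
} else if (trade.buyToken == Deployments.ETH_ADDRESS) {
    executionCallData = abi.encodeWithSelector(
        IUniV2Router2.swapExactTokensForETH.selector,
        trade.amount,     // amountIn
        trade.limit,      // amountOutMin
        data.path,
        from,
        trade.deadline
    );
} else {
    executionCallData = abi.encodeWithSelector(
        IUniV2Router2.swapExactTokensForTokens.selector,
        trade.amount,
        trade.limit,
        data.path,
        from,
        trade.deadline
    );
}
```

The first option (setting `spender`/`target` the way `UniV3Adapter` does and letting `_executeTrade` wrap to WETH) achieves the same result with less new surface area.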
Deployments.sol uses the wrong address for UNIV2 router which causes all Uniswap V2 calls to fail | medium | Deployments.sol accidentally uses the Uniswap V3 router address for UNIV2_ROUTER which causes all Uniswap V2 calls to fail\\n```\\nIUniV2Router2 internal constant UNIV2_ROUTER = IUniV2Router2(0xE592427A0AEce92De3Edee1F18E0157C05861564);\\n```\\n\\nThe constant UNIV2_ROUTER contains the address for the Uniswap V3 router, which doesn't contain the "swapExactTokensForTokens" or "swapTokensForExactTokens" methods. As a result, all calls made to Uniswap V2 will revert. | Change UNIV2_ROUTER to the address of the V2 router:\\n```\\nIUniV2Router2 internal constant UNIV2_ROUTER = IUniV2Router2(0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D);\\n```\\n | Uniswap V2 is totally unusable | ```\\nIUniV2Router2 internal constant UNIV2_ROUTER = IUniV2Router2(0xE592427A0AEce92De3Edee1F18E0157C05861564);\\n```\\n |
stakingContext.auraRewardPool.withdrawAndUnwrap boolean return value not handled in Boosted3TokenPoolUtils.sol and TwoTokenPoolUtils.sol | medium | stakingContext.auraRewardPool.withdrawAndUnwrap boolean return value not handled in Boosted3TokenPoolUtils.sol and TwoTokenPoolUtils.sol\\nWhen calling the function _unstakeAndExitPool,\\nthe contract withdraws BPT tokens back to the vault for redemption\\nby calling\\n```\\nstakingContext.auraRewardPool.withdrawAndUnwrap(bptClaim, false);\\n```\\n\\nHowever, the underlying call withdrawAndUnwrap returns a boolean value, and the contract does not handle the return value.\\nThe interface of IAuraRewardPool already indicates that the underlying call returns a value\\n```\\ninterface IAuraRewardPool {\\n function withdrawAndUnwrap(uint256 amount, bool claim) external returns(bool);\\n```\\n\\nand the underlying call in BaseRewardConvexPool.sol also returns the boolean\\n```\\n function withdrawAndUnwrap(uint256 amount, bool claim) public updateReward(msg.sender) returns(bool){\\n```\\n | We recommend the project handle the return value when unstaking explicitly\\n```\\nbool unstaked = stakingContext.auraRewardPool.withdrawAndUnwrap(bptClaim, false);\\nrequire(unstaked, 'unstake failed');\\n```\\n | Because there is a stack of external calls:\\nNotional -> auraRewardPool -> BaseRewardPool,\\nwithout handling the return value explicitly, the transaction risks failing silently. | ```\\nstakingContext.auraRewardPool.withdrawAndUnwrap(bptClaim, false);\\n```\\n |
stakingContext.auraBooster.deposit boolean return value not handled in Boosted3TokenPoolUtils.sol | medium | stakingContext.auraBooster.deposit boolean return value not handled in Boosted3TokenPoolUtils.sol\\nThe function _joinPoolAndStake in Boosted3TokenPoolUtils.sol is used extensively when handling the token stake.\\nHowever, when entering the stake and interacting with the external contract, the logic does not handle the returned boolean value in the code below\\n```\\n // Transfer token to Aura protocol for boosted staking\\n stakingContext.auraBooster.deposit(stakingContext.auraPoolId, bptMinted, true); // stake = true\\n```\\n\\nIn the AuraBooster implementation, a boolean is indeed returned to acknowledge that the deposit completed successfully.\\n```\\n /**\\n * @notice Deposits an "_amount" to a given gauge (specified by _pid), mints a `DepositToken`\\n * and subsequently stakes that on Convex BaseRewardPool\\n */\\n function deposit(uint256 _pid, uint256 _amount, bool _stake) public returns(bool){\\n```\\n | We recommend the project handle the stakingContext.auraBooster.deposit boolean return value explicitly.\\n```\\n // Transfer token to Aura protocol for boosted staking\\n bool staked = stakingContext.auraBooster.deposit(stakingContext.auraPoolId, bptMinted, true); // stake = true\\n require(staked, 'stake failed');\\n```\\n | Because there are two layers of external calls:\\nNotional -> AuraBooster -> BaseRewardPool,\\nwithout handling the boolean value explicitly, there is a risk that the transaction fails silently. | ```\\n // Transfer token to Aura protocol for boosted staking\\n stakingContext.auraBooster.deposit(stakingContext.auraPoolId, bptMinted, true); // stake = true\\n```\\n |
`CrossCurrencyfCashVault` Cannot Settle Its Assets In Pieces | medium | The `CrossCurrencyfCashVault` vault cannot settle its assets in pieces. Thus, it might cause the vault to incur unnecessary slippage.\\nThe settle vault function is designed in a manner where its assets can be settled in pieces. Therefore, the `settleVault` function accepts a `strategyTokens` or `strategyTokensToRedeem` parameter to allow the caller to specify the number of strategy tokens to be settled.\\nThe reason as mentioned in Notional's walkthrough video (Refer to the explanation at 15.50min mark) is that in some cases the caller might want to break down into multiple transactions due to massive slippage.\\nFor instance, the vault might utilize a 2 day settlement period to allow the vault to settle its assets in pieces so that it can avoid unnecessary transaction costs associated with converting all its assets back to USDC in a single transaction.\\n```\\nFile: CrossCurrencyfCashVault.sol\\n /**\\n * @notice During settlement all of the fCash balance in the lend currency will be redeemed to the\\n * underlying token and traded back to the borrow currency. All of the borrow currency will be deposited\\n * into the Notional contract as asset tokens and held for accounts to withdraw. Settlement can only\\n * be called after maturity.\\n * @param maturity the maturity to settle\\n * @param settlementTrade details for the settlement trade\\n */\\n function settleVault(uint256 maturity, uint256 strategyTokens, bytes calldata settlementTrade) external {\\n require(maturity <= block.timestamp, "Cannot Settle");\\n VaultState memory vaultState = NOTIONAL.getVaultState(address(this), maturity);\\n require(vaultState.isSettled == false);\\n require(vaultState.totalStrategyTokens >= strategyTokens);\\n\\n RedeemParams memory params = abi.decode(settlementTrade, (RedeemParams));\\n \\n // The only way for underlying value to be negative would be if the vault has somehow ended up with a borrowing\\n // position in the lend underlying currency. This is explicitly prevented during redemption.\\n uint256 underlyingValue = convertStrategyToUnderlying(\\n address(0), vaultState.totalStrategyTokens, maturity\\n ).toUint();\\n\\n // Authenticate the minimum purchase amount, all tokens will be sold given this slippage limit.\\n uint256 minAllowedPurchaseAmount = (underlyingValue * settlementSlippageLimit) / SETTLEMENT_SLIPPAGE_PRECISION;\\n require(params.minPurchaseAmount >= minAllowedPurchaseAmount, "Purchase Limit");\\n\\n NOTIONAL.redeemStrategyTokensToCash(maturity, strategyTokens, settlementTrade);\\n\\n // If there are no more strategy tokens left, then mark the vault as settled\\n vaultState = NOTIONAL.getVaultState(address(this), maturity);\\n if (vaultState.totalStrategyTokens == 0) {\\n NOTIONAL.settleVault(address(this), maturity);\\n }\\n }\\n```\\n\\nDuring vault settlement, the `CrossCurrencyfCashVault._redeemFromNotional` function will be called, and the code in lines 252-262 will be executed. However, it was observed that the `strategyTokens` parameter is ignored, and the vault will forcefully settle all the strategy tokens in one go. As such, there is no way for the caller to break down the settle vault transaction into multiple transactions.\\n```\\nFile: CrossCurrencyfCashVault.sol\\n function _redeemFromNotional(\\n address account,\\n uint256 strategyTokens,\\n uint256 maturity,\\n bytes calldata data\\n ) internal override returns (uint256 borrowedCurrencyAmount) {\\n uint256 balanceBefore = LEND_UNDERLYING_TOKEN.balanceOf(address(this));\\n RedeemParams memory params = abi.decode(data, (RedeemParams));\\n\\n if (maturity <= block.timestamp) {\\n // Only allow the vault to redeem past maturity to settle all positions\\n require(account == address(this));\\n NOTIONAL.settleAccount(address(this));\\n (int256 cashBalance, /* */, /* */) = NOTIONAL.getAccountBalance(LEND_CURRENCY_ID, address(this));\\n\\n // It should never be possible that this contract has a negative cash balance\\n require(0 <= cashBalance && cashBalance <= int256(uint256(type(uint88).max)));\\n\\n // Withdraws all cash to underlying\\n NOTIONAL.withdraw(LEND_CURRENCY_ID, uint88(uint256(cashBalance)), true);\\n } else {\\n // Sells fCash on Notional AMM (via borrowing)\\n BalanceActionWithTrades[] memory action = _encodeBorrowTrade(\\n maturity,\\n strategyTokens,\\n params.maxBorrowRate\\n );\\n NOTIONAL.batchBalanceAndTradeAction(address(this), action);\\n\\n // Check that we have not somehow borrowed into a negative fCash position, vault borrows\\n // are not included in account context\\n AccountContext memory accountContext = NOTIONAL.getAccountContext(address(this));\\n require(accountContext.hasDebt == 0x00);\\n }\\n\\n uint256 balanceAfter = LEND_UNDERLYING_TOKEN.balanceOf(address(this));\\n \\n // Trade back to borrow currency for repayment\\n Trade memory trade = Trade({\\n tradeType: TradeType.EXACT_IN_SINGLE,\\n sellToken: address(LEND_UNDERLYING_TOKEN),\\n buyToken: address(_underlyingToken()),\\n amount: balanceAfter - balanceBefore,\\n limit: params.minPurchaseAmount,\\n deadline: block.timestamp,\\n exchangeData: params.exchangeData\\n });\\n\\n (/* */, borrowedCurrencyAmount) = _executeTrade(params.dexId, trade);\\n }\\n```\\n | It is recommended to update the `CrossCurrencyfCashVault._redeemFromNotional` function to allow the vault to be settled in multiple transactions. | The vault might incur unnecessary slippage during settlement as the settlement cannot be broken into multiple transactions. | ```\\nFile: CrossCurrencyfCashVault.sol\\n /**\\n * @notice During settlement all of the fCash balance in the lend currency will be redeemed to the\\n * underlying token and traded back to the borrow currency. All of the borrow currency will be deposited\\n * into the Notional contract as asset tokens and held for accounts to withdraw. Settlement can only\\n * be called after maturity.\\n * @param maturity the maturity to settle\\n * @param settlementTrade details for the settlement trade\\n */\\n function settleVault(uint256 maturity, uint256 strategyTokens, bytes calldata settlementTrade) external {\\n require(maturity <= block.timestamp, "Cannot Settle");\\n VaultState memory vaultState = NOTIONAL.getVaultState(address(this), maturity);\\n require(vaultState.isSettled == false);\\n require(vaultState.totalStrategyTokens >= strategyTokens);\\n\\n RedeemParams memory params = abi.decode(settlementTrade, (RedeemParams));\\n \\n // The only way for underlying value to be negative would be if the vault has somehow ended up with a borrowing\\n // position in the lend underlying currency. This is explicitly prevented during redemption.\\n uint256 underlyingValue = convertStrategyToUnderlying(\\n address(0), vaultState.totalStrategyTokens, maturity\\n ).toUint();\\n\\n // Authenticate the minimum purchase amount, all tokens will be sold given this slippage limit.\\n uint256 minAllowedPurchaseAmount = (underlyingValue * settlementSlippageLimit) / SETTLEMENT_SLIPPAGE_PRECISION;\\n require(params.minPurchaseAmount >= minAllowedPurchaseAmount, "Purchase Limit");\\n\\n NOTIONAL.redeemStrategyTokensToCash(maturity, strategyTokens, settlementTrade);\\n\\n // If there are no more strategy tokens left, then mark the vault as settled\\n vaultState = NOTIONAL.getVaultState(address(this), maturity);\\n if (vaultState.totalStrategyTokens == 0) {\\n NOTIONAL.settleVault(address(this), maturity);\\n }\\n }\\n```\\n |
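If partial settlement is added, the post-maturity redemption path could withdraw only the pro-rata share of cash for the `strategyTokens` being settled instead of the full balance. A simplified sketch under that assumption (names are illustrative, not Notional's actual implementation):

```
pragma solidity ^0.8.0;

library PartialSettlement {
    // Compute the share of the settled cash balance that corresponds to the
    // strategy tokens being redeemed, so settlement can be split across
    // several transactions. The caller would then pass the result to
    // NOTIONAL.withdraw(LEND_CURRENCY_ID, cashToWithdraw, true).
    function shareOfCash(
        int256 cashBalance,
        uint256 strategyTokens,
        uint256 totalStrategyTokens
    ) internal pure returns (uint88 cashToWithdraw) {
        require(cashBalance >= 0 && totalStrategyTokens > 0, "invalid state");
        uint256 share = (uint256(cashBalance) * strategyTokens) / totalStrategyTokens;
        require(share <= type(uint88).max, "overflow");
        cashToWithdraw = uint88(share);
    }
}
```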
`CrossCurrencyfCashVault` Cannot Be Upgraded | medium | `CrossCurrencyfCashVault` cannot be upgraded as it is missing the authorize upgrade method.\\nThe Cross Currency Vault is expected to be upgradeable as:\\nThis vault is similar to the other vaults (Boosted3TokenAuraVault and MetaStable2TokenAuraVault) provided by Notional that are upgradeable by default.\\nThe `BaseStrategyVault` has configured the storage gaps `uint256[45] private __gap` for upgrading purposes\\nClarified with the sponsor and noted that Cross Currency Vault should be upgradeable\\n`CrossCurrencyfCashVault` inherits from `BaseStrategyVault`. However, the `BaseStrategyVault` forgets to inherit OpenZeppelin's `UUPSUpgradeable` contract. Therefore, it is missing the authorize upgrade method, and the contract cannot be upgraded.\\n```\\nabstract contract BaseStrategyVault is Initializable, IStrategyVault {\\n using TokenUtils for IERC20;\\n using TradeHandler for Trade;\\n\\n /// @notice Hardcoded on the implementation contract during deployment\\n NotionalProxy public immutable NOTIONAL;\\n ITradingModule public immutable TRADING_MODULE;\\n uint8 constant internal INTERNAL_TOKEN_DECIMALS = 8;\\n \\n ..SNIP..\\n \\n // Storage gap for future potential upgrades\\n uint256[45] private __gap;\\n }\\n```\\n\\n```\\ncontract CrossCurrencyfCashVault is BaseStrategyVault {\\n using TypeConvert for uint256;\\n using TypeConvert for int256;\\n\\n uint256 public constant SETTLEMENT_SLIPPAGE_PRECISION = 1e18;\\n\\n struct DepositParams {\\n // Minimum purchase amount of the lend underlying token, this is\\n // based on the deposit + borrowed amount and must be set to a non-zero\\n // value to establish a slippage limit.\\n uint256 minPurchaseAmount;\\n // Minimum annualized lending rate, can be set to zero for no slippage limit\\n uint32 minLendRate;\\n // ID of the desired DEX to trade on, _depositFromNotional will always trade\\n // using an EXACT_IN_SINGLE trade which is supported by all DEXes\\n uint16 dexId;\\n // Exchange data depending on the selected dexId\\n ..SNIP..\\n```\\n | It is recommended to inherit OpenZeppelin's `UUPSUpgradeable` contract and implement the missing authorize upgrade method.\\n```\\n// Remove the line below\\n abstract contract BaseStrategyVault is Initializable, IStrategyVault {\\n// Add the line below\\n abstract contract BaseStrategyVault is Initializable, IStrategyVault, UUPSUpgradeable {\\n using TokenUtils for IERC20;\\n using TradeHandler for Trade;\\n\\n /// @notice Hardcoded on the implementation contract during deployment\\n NotionalProxy public immutable NOTIONAL;\\n ITradingModule public immutable TRADING_MODULE;\\n uint8 constant internal INTERNAL_TOKEN_DECIMALS = 8;\\n \\n ..SNIP..\\n \\n// Add the line below\\n function _authorizeUpgrade(\\n// Add the line below\\n address /* newImplementation */\\n// Add the line below\\n ) internal override onlyNotionalOwner {} \\n \\n // Storage gap for future potential upgrades\\n uint256[45] private __gap;\\n }\\n```\\n | If a critical bug is discovered within the Cross Currency Vault after launching that causes a loss of assets, the vault cannot be upgraded, unlike the other Balancer-related vaults, to fix the bugs. All assets within the vault will be lost | ```\\nabstract contract BaseStrategyVault is Initializable, IStrategyVault {\\n using TokenUtils for IERC20;\\n using TradeHandler for Trade;\\n\\n /// @notice Hardcoded on the implementation contract during deployment\\n NotionalProxy public immutable NOTIONAL;\\n ITradingModule public immutable TRADING_MODULE;\\n uint8 constant internal INTERNAL_TOKEN_DECIMALS = 8;\\n \\n ..SNIP..\\n \\n // Storage gap for future potential upgrades\\n uint256[45] private __gap;\\n }\\n```\\n |
getAmplificationParameter() precision is not used, which results in accounting issues in MetaStable2TokenAuraHelper.sol and in Boosted3TokenAuraHelper.sol | medium | getAmplificationParameter() precision is not used, which results in accounting issues in MetaStable2TokenAuraHelper.sol and in Boosted3TokenAuraHelper.sol\\nThis report has two parts:\\npart one traces the accounting issue in MetaStable2TokenAuraHelper.sol,\\npart two traces the accounting issue in Boosted3TokenAuraHelper.sol,\\nboth issues are rooted in not handling the getAmplificationParameter() precision\\nAccording to the Balancer documentation\\npool.getAmplificationParameter()\\nreturns something resembling\\nvalue : 620000 isUpdating : False precision : 1000\\nwhere the amplification parameter is 620000 / 1000 = 620\\nbut in the code, the isUpdating and precision returned are ignored and not used.\\nPart One\\nLet's trace the function reinvestReward in MetaStable2TokenAuraHelper.sol\\n```\\n function reinvestReward(\\n MetaStable2TokenAuraStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n )\\n```\\n\\nIt calls\\n```\\n// Make sure we are joining with the right proportion to minimize slippage\\n oracleContext._validateSpotPriceAndPairPrice({\\n poolContext: poolContext,\\n strategyContext: strategyContext,\\n primaryAmount: primaryAmount,\\n secondaryAmount: secondaryAmount\\n });\\n```\\n\\nthen it calls\\n```\\nuint256 spotPrice = _getSpotPrice(oracleContext, poolContext, 0);\\n```\\n\\nThen, inside the function:\\n```\\n (uint256 balanceX, uint256 balanceY) = tokenIndex == 0 ?\\n (poolContext.primaryBalance, poolContext.secondaryBalance) :\\n (poolContext.secondaryBalance, poolContext.primaryBalance);\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n oracleContext.ampParam, StableMath._balances(balanceX, balanceY), true // round up\\n );\\n\\n spotPrice = StableMath._calcSpotPrice({\\n amplificationParameter: oracleContext.ampParam,\\n invariant: invariant,\\n balanceX: balanceX,\\n balanceY: balanceY\\n });\\n```\\n\\nWhat's wrong with this? I believe the precision of ampParam is mishandled, because when we get the oracleContext.ampParam from MetaStable2TokenVaultMixin.sol\\nwe do not use the precision returned from the pool\\n```\\n (\\n uint256 value,\\n /* bool isUpdating */,\\n /* uint256 precision */\\n ) = IMetaStablePool(address(BALANCER_POOL_TOKEN)).getAmplificationParameter();\\n```\\n\\nAccording to the Balancer documentation\\npool.getAmplificationParameter()\\nreturns something resembling\\nvalue : 620000 isUpdating : False precision : 1000\\nwhere the amplification parameter is 620000 / 1000 = 620\\nThe formula that calculates the spot price is\\n```\\n /**************************************************************************************************************\\n // //\\n // 2.a.x.y + a.y^2 + b.y //\\n // spot price Y/X = - dx/dy = ----------------------- //\\n // 2.a.x.y + a.x^2 + b.x //\\n // //\\n // n = 2 //\\n // a = amp param * n //\\n // b = D + a.(S - D) //\\n // D = invariant //\\n // S = sum of balances but x,y = 0 since x and y are the only tokens //\\n **************************************************************************************************************/\\n```\\n\\nthe function _calcSpotPrice hardcodes the amp precision to 1e3:\\n```\\n uint256 internal constant _AMP_PRECISION = 1e3;\\n```\\n\\nand implements\\n```\\nuint256 a = (amplificationParameter * 2) / _AMP_PRECISION;\\n```\\n\\nif the pool's amp precision is not equal to _AMP_PRECISION, the math will break.\\nPart Two\\nLet's trace the call in Boosted3TokenPoolUtils.sol\\nFirst the function reinvestReward in Boosted3TokenAuraHelper.sol is called\\n```\\n function reinvestReward(\\n Boosted3TokenAuraStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n ) \\n```\\n\\nThen we call\\n```\\n uint256 minBPT = context.poolContext._getMinBPT(\\n oracleContext, strategyContext, primaryAmount\\n );\\n```\\n\\nthen we call\\n```\\n minBPT = StableMath._calcBptOutGivenExactTokensIn({\\n amp: oracleContext.ampParam,\\n balances: balances,\\n amountsIn: amountsIn,\\n bptTotalSupply: virtualSupply,\\n swapFeePercentage: 0,\\n currentInvariant: invariant\\n });\\n```\\n\\nthen we call\\n```\\n // Get current and new invariants, taking swap fees into account\\n uint256 newInvariant = _calculateInvariant(amp, newBalances, false);\\n uint256 invariantRatio = newInvariant.divDown(currentInvariant);\\n```\\n\\nthen we call\\n```\\n uint256 ampTimesTotal = amplificationParameter * numTokens;\\n```\\n\\nwe just use the amplificationParameter without handling the precision.\\nThe amplificationParameter comes from BoostedTokenPoolMixin.sol\\n```\\n (\\n uint256 value,\\n /* bool isUpdating */,\\n /* uint256 precision */\\n ) = pool.getAmplificationParameter();\\n```\\n\\nthe isUpdating and precision are not used.\\nHowever, according to the Balancer documentation,\\npool.getAmplificationParameter()\\nreturns something resembling\\nvalue : 620000 isUpdating : False precision : 1000\\nwhere the amplification parameter is 620000 / 1000 = 620 | Issue getAmplificationParameter() precision is not used, which results in accounting issues in MetaStable2TokenAuraHelper.sol and in Boosted3TokenAuraHelper.sol\\nWe recommend the project use the precision returned from getAmplificationParameter()\\n```\\n (\\n uint256 value,\\n /* bool isUpdating */,\\n uint256 precision\\n ) = IMetaStablePool(address(BALANCER_POOL_TOKEN)).getAmplificationParameter();\\n return value / precision;\\n```\\n | The amplificationParameter has a precision; ignoring the precision will result in accounting issues.\\nIf the precision of the amplificationParameter is not equal to the hardcoded 1e3, the spot price is invalid.\\nThe code\\n```\\n uint256 ampTimesTotal = amplificationParameter * numTokens;\\n```\\n\\nwill be overvalued because we did not divide the value by the precision. | ```\\n function reinvestReward(\\n MetaStable2TokenAuraStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n )\\n```\\n |
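One way to keep the math consistent with StableMath's hardcoded 1e3 precision is to rescale the pool's amp value instead of discarding the returned precision. A hedged sketch (the library and helper names are assumptions, not the audited code):

```
pragma solidity ^0.8.0;

interface IMetaStablePool {
    function getAmplificationParameter() external view returns (uint256 value, bool isUpdating, uint256 precision);
}

library AmpParam {
    uint256 internal constant _AMP_PRECISION = 1e3; // the precision StableMath expects

    // Rescale the pool's amp value to StableMath's precision, e.g.
    // value = 620000, precision = 1000 -> amp = 620, returned as 620 * 1e3.
    function scaledAmpParam(IMetaStablePool pool) internal view returns (uint256) {
        (uint256 value, , uint256 precision) = pool.getAmplificationParameter();
        return (value * _AMP_PRECISION) / precision;
    }
}
```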
When one of the plugins is broken or paused, `deposit()` or `withdraw()` of the whole Vault contract can malfunction | medium | One malfunctioning plugin can result in the whole Vault contract malfunctioning.\\nA given plugin can temporarily or even permanently become malfunctioning (cannot deposit/withdraw) for all sorts of reasons.\\nE.g., Aave V2 Lending Pool can be paused, which will prevent multiple core functions that the Aave v2 plugin depends on from working, including `lendingPool.deposit()` and `lendingPool.withdraw()`.\\n```\\n modifier whenNotPaused() {\\n _whenNotPaused();\\n _;\\n }\\n```\\n\\n```\\n function withdraw(\\n address asset,\\n uint256 amount,\\n address to\\n ) external override whenNotPaused returns (uint256) {\\n```\\n\\nThat's because deposits always go to the first plugin, and withdrawals come from the last plugin first. | Issue When one of the plugins is broken or paused, `deposit()` or `withdraw()` of the whole Vault contract can malfunction\\nConsider introducing a new method to pause one plugin from the Vault contract level;\\nAave V2's Lending Pool contract has a view function `paused()`; consider returning `0` for `availableForDeposit()` and `availableForWithdrawal()` when the pool is paused in AaveV2Plugin:\\n```\\nfunction availableForDeposit() public view override returns (uint256) {\\n if (lendingPool.paused()) return 0;\\n return type(uint256).max - balance();\\n}\\n```\\n\\n```\\nfunction availableForWithdrawal() public view override returns (uint256) {\\n if (lendingPool.paused()) return 0;\\n return balance();\\n}\\n```\\n | When Aave V2 Lending Pool is paused, users won't be able to deposit or withdraw from the vault.\\nNeither can the owner remove the plugin nor rebalance it to other plugins to resume operation.\\nBecause withdrawal from the plugin cannot be done, and removing a plugin or rebalancing both rely on this. | ```\\n modifier whenNotPaused() {\\n _whenNotPaused();\\n _;\\n }\\n```\\n |
`_withdrawFromPlugin()` will revert when `_withdrawalValues[i] == 0` | medium | When `_withdrawalValues[i] == 0` in `rebalancePlugins()`, it means NOT to rebalance this plugin.\\nHowever, the current implementation still tries to withdraw 0 from the plugin.\\nThis will revert in AaveV2Plugin as Aave V2's `validateWithdraw()` does not allow `0` withdrawals:\\n```\\n function validateWithdraw(\\n address reserveAddress,\\n uint256 amount,\\n uint256 userBalance,\\n mapping(address => DataTypes.ReserveData) storage reservesData,\\n DataTypes.UserConfigurationMap storage userConfig,\\n mapping(uint256 => address) storage reserves,\\n uint256 reservesCount,\\n address oracle\\n ) external view {\\n require(amount != 0, Errors.VL_INVALID_AMOUNT);\\n```\\n\\n`removePlugin()` will also always `_withdrawFromPlugin()` even if the plugin's balance is 0, as it will also tries to withdraw 0 in that case (balance is 0). | Only call `_withdrawFromPlugin()` when IPlugin(pluginAddr).balance() > 0:\\n```\\nfunction removePlugin(uint256 _index) external onlyOwner {\\n require(_index < pluginCount, "Index out of bounds");\\n address pluginAddr = plugins[_index];\\n if (IPlugin(pluginAddr).balance() > 0){\\n _withdrawFromPlugin(pluginAddr, IPlugin(pluginAddr).balance());\\n }\\n uint256 pointer = _index;\\n while (pointer < pluginCount - 1) {\\n plugins[pointer] = plugins[pointer + 1];\\n pointer++;\\n }\\n delete plugins[pluginCount - 1];\\n pluginCount--;\\n\\n IERC20(LINK).approve(pluginAddr, 0);\\n\\n emit PluginRemoved(pluginAddr);\\n}\\n```\\n\\n```\\nfunction rebalancePlugins(uint256[] memory _withdrawalValues) external onlyOwner {\\n require(_withdrawalValues.length == pluginCount, "Invalid withdrawal values");\\n for (uint256 i = 0; i < pluginCount; i++) {\\n if (_withdrawalValues[i] > 0)\\n _withdrawFromPlugin(plugins[i], _withdrawalValues[i]);\\n }\\n _distributeToPlugins();\\n}\\n```\\n | For AaveV2Plugin (and any future plugins that dont allow withdraw 0):\\nIn every rebalance call, it must at least withdraw 1 wei from the plugin for the rebalance to work.\\nThe plugin can not be removed or rebalanced when there is no balance in it.\\nIf such a plugin can not deposit for some reason (paused by gov, AaveV2Plugin may face that), this will further cause the whole system unable to be rebalanced until the deposit resumes for that plugin. | ```\\n function validateWithdraw(\\n address reserveAddress,\\n uint256 amount,\\n uint256 userBalance,\\n mapping(address => DataTypes.ReserveData) storage reservesData,\\n DataTypes.UserConfigurationMap storage userConfig,\\n mapping(uint256 => address) storage reserves,\\n uint256 reservesCount,\\n address oracle\\n ) external view {\\n require(amount != 0, Errors.VL_INVALID_AMOUNT);\\n```\\n |
Unregulated joining fees | medium | Observe the _deposit function.\\nThis makes a call to the join function:\\n```\\nfunction join(uint256 amount) external override joiningNotPaused {\\n uint256 fee = amount.mul(joiningFee).div(BASIS_PRECISION);\\n uint256 mintedAmount = mint(amount.sub(fee));\\n claimableFees = claimableFees.add(fee);\\n\\n // TODO: tx.origin will be deprecated in a future ethereum upgrade\\n latestJoinBlock[tx.origin] = block.number;\\n token.safeTransferFrom(msg.sender, address(this), amount);\\n\\n emit Joined(msg.sender, amount, mintedAmount);\\n }\\n```\\n\\nAs we can see, this join function deducts a fee from the deposited amount before minting. Let's look at this joining fee.\\nThe joining fee is introduced using the setJoiningFee function:\\n```\\nfunction setJoiningFee(uint256 fee) external onlyOwner {\\n require(fee <= BASIS_PRECISION, "TrueFiPool: Fee cannot exceed transaction value");\\n joiningFee = fee;\\n emit JoiningFeeChanged(fee);\\n }\\n```\\n\\nThis means the joiningFee will always be between 0 and BASIS_PRECISION. BASIS_PRECISION represents 100%, as shown:\\n```\\nuint256 private constant BASIS_PRECISION = 10000;\\n```\\n\\nThis means that if joiningFee is set to BASIS_PRECISION, then the entire user deposit will go to joining fees, with the user getting nothing | Issue Unregulated joining fees\\nAfter calling join, check the amount of shares minted for this user (use balanceOf on TrueFiPool2.sol) and revert the transaction if it is below the expected minimum\\n```\\nuint256 tfUsdcBalance = tfUSDC.balanceOf(address(this));\\nrequire(tfUsdcBalance>=minSharesExpected, "Too high fees");\\n```\\n | Contract will lose all deposited funds | ```\\nfunction join(uint256 amount) external override joiningNotPaused {\\n uint256 fee = amount.mul(joiningFee).div(BASIS_PRECISION);\\n uint256 mintedAmount = mint(amount.sub(fee));\\n claimableFees = claimableFees.add(fee);\\n\\n // TODO: tx.origin will be deprecated in a future ethereum upgrade\\n latestJoinBlock[tx.origin] = block.number;\\n token.safeTransferFrom(msg.sender, address(this), amount);\\n\\n emit Joined(msg.sender, amount, mintedAmount);\\n }\\n```\\n |
CTokenOracle.sol#getCErc20Price contains critical math error | high | CTokenOracle.sol#getCErc20Price contains a math error that immensely overvalues CTokens\\nCTokenOracle.sol#L66-L76\\n```\\nfunction getCErc20Price(ICToken cToken, address underlying) internal view returns (uint) {\\n /*\\n cToken Exchange rates are scaled by 10^(18 - 8 + underlying token decimals) so to scale\\n the exchange rate to 18 decimals we must multiply it by 1e8 and then divide it by the\\n number of decimals in the underlying token. Finally to find the price of the cToken we\\n must multiply this value with the current price of the underlying token\\n */\\n return cToken.exchangeRateStored()\\n .mulDivDown(1e8 , IERC20(underlying).decimals())\\n .mulWadDown(oracle.getPrice(underlying));\\n}\\n```\\n\\nIn L74, the divisor uses IERC20(underlying).decimals() directly instead of 10 ** IERC20(underlying).decimals(). This results in the price of the cToken being overvalued by many orders of magnitude. A user could deposit one CToken and drain the reserves of every liquidity pool. | Issue CTokenOracle.sol#getCErc20Price contains critical math error\\nFix the math error by changing L74:\\n```\\nreturn cToken.exchangeRateStored()\\n.mulDivDown(1e8 , 10 ** IERC20(underlying).decimals())\\n.mulWadDown(oracle.getPrice(underlying));\\n \\n```\\n\\nSentiment Team\\nFixed as recommended. PR here.\\nLead Senior Watson\\nConfirmed fix. | All lenders could be drained of all their funds due to the excessive overvaluation of CTokens caused by this error | ```\\nfunction getCErc20Price(ICToken cToken, address underlying) internal view returns (uint) {\\n /*\\n cToken Exchange rates are scaled by 10^(18 - 8 + underlying token decimals) so to scale\\n the exchange rate to 18 decimals we must multiply it by 1e8 and then divide it by the\\n number of decimals in the underlying token. Finally to find the price of the cToken we\\n must multiply this value with the current price of the underlying token\\n */\\n return cToken.exchangeRateStored()\\n .mulDivDown(1e8 , IERC20(underlying).decimals())\\n .mulWadDown(oracle.getPrice(underlying));\\n}\\n```\\n |
Protocol Reserve Within A LToken Vault Can Be Lent Out | medium | Protocol reserve, which serves as a liquidity backstop or to compensate the protocol, within a LToken vault can be lent out to the borrowers.\\nThe purpose of the protocol reserve within a LToken vault is to compensate the protocol or serve as a liquidity backstop. However, based on the current setup, it is possible for the protocol reserve within a LToken vault to be lent out.\\nThe following functions within the `LToken` contract show that the protocol reserve is intentionally preserved by removing the protocol reserve from the calculation of total assets within a `LToken` vault. As such, whenever the Liquidity Providers (LPs) attempt to redeem their LP token, the protocol reserves will stay intact and will not be withdrawn by the LPs.\\n```\\nfunction totalAssets() public view override returns (uint) {\\n return asset.balanceOf(address(this)) + getBorrows() - getReserves();\\n}\\n```\\n\\n```\\nfunction getBorrows() public view returns (uint) {\\n return borrows + borrows.mulWadUp(getRateFactor());\\n}\\n```\\n\\n```\\nfunction getReserves() public view returns (uint) {\\n return reserves + borrows.mulWadUp(getRateFactor())\\n .mulWadUp(reserveFactor);\\n}\\n```\\n\\nHowever, this measure is not applied consistently across the protocol. The following `lendTo` function shows that as long as the borrower has sufficient collateral to ensure their account remains healthy, the borrower could borrow as many assets from the LToken vault as they wish.\\nIn the worst-case scenario, the borrower can borrow all the assets from the LToken vault, including the protocol reserve.\\n```\\nFile: LToken.sol\\n /**\\n @notice Lends a specified amount of underlying asset to an account\\n @param account Address of account\\n @param amt Amount of token to lend\\n @return isFirstBorrow Returns if the account is borrowing the asset for\\n the first time\\n */\\n function lendTo(address account, uint amt)\\n external\\n whenNotPaused\\n accountManagerOnly\\n returns (bool isFirstBorrow)\\n {\\n updateState();\\n isFirstBorrow = (borrowsOf[account] == 0);\\n\\n uint borrowShares;\\n require((borrowShares = convertAssetToBorrowShares(amt)) != 0, "ZERO_BORROW_SHARES");\\n totalBorrowShares += borrowShares;\\n borrowsOf[account] += borrowShares;\\n\\n borrows += amt;\\n asset.safeTransfer(account, amt);\\n return isFirstBorrow;\\n }\\n```\\n | Issue Protocol Reserve Within A LToken Vault Can Be Lent Out\\nConsider updating the `lendTo` function to ensure that the protocol reserve is preserved and cannot be lent out. If the underlying asset of a LToken vault is less than or equal to the protocol reserve, the lending should be paused as it is more important to preserve the protocol reserve compared to lending it out.\\n```\\nfunction lendTo(address account, uint amt)\\n external\\n whenNotPaused\\n accountManagerOnly\\n returns (bool isFirstBorrow)\\n{\\n updateState();\\n isFirstBorrow = (borrowsOf[account] == 0);\\n\\n uint borrowShares;\\n require((borrowShares = convertAssetToBorrowShares(amt)) != 0, "ZERO_BORROW_SHARES");\\n totalBorrowShares += borrowShares;\\n borrowsOf[account] += borrowShares;\\n\\n borrows += amt;\\n asset.safeTransfer(account, amt);\\n\\n // Add the line below\\n require(asset.balanceOf(address(this)) >= getReserves(), "Not enough liquidity for lending");\\n\\n return isFirstBorrow;\\n}\\n```\\n\\nSentiment Team\\nWe removed reserves completely in this PR.\\nLead Senior Watson\\nConfirmed fix. | The purpose of the protocol reserve within a LToken vault is to compensate the protocol or serve as a liquidity backstop. Without the protocol reserve, the protocol will become illiquid, and there are no funds to compensate the protocol. | ```\\nfunction totalAssets() public view override returns (uint) {\\n return asset.balanceOf(address(this)) + getBorrows() - getReserves();\\n}\\n```\\n |
ERC4626Oracle Vulnerable To Price Manipulation | medium | ERC4626 oracle is vulnerable to price manipulation. This allows an attacker to increase or decrease the price to carry out various attacks against the protocol.\\nThe `getPrice` function within the `ERC4626Oracle` contract is vulnerable to price manipulation because the price can be increased or decreased within a single transaction/block.\\nBased on the `getPrice` function, the price of the LP token of an ERC4626 vault is dependent on the `ERC4626.previewRedeem` and `oracleFacade.getPrice` functions. If the value returns by either `ERC4626.previewRedeem` or `oracleFacade.getPrice` can be manipulated within a single transaction/block, the price of the LP token of an ERC4626 vault is considered to be vulnerable to price manipulation.\\n```\\nFile: ERC4626Oracle.sol\\n function getPrice(address token) external view returns (uint) {\\n uint decimals = IERC4626(token).decimals();\\n return IERC4626(token).previewRedeem(\\n 10 ** decimals\\n ).mulDivDown(\\n oracleFacade.getPrice(IERC4626(token).asset()),\\n 10 ** decimals\\n );\\n }\\n```\\n\\nIt was observed that the `ERC4626.previewRedeem` couldbe manipulated within a single transaction/block. As shown below, the `previewRedeem` function will call the `convertToAssets` function. Within the `convertToAssets`, the number of assets per share is calculated based on the current/spot total assets and current/spot supply that can be increased or decreased within a single block/transaction by calling the vault's deposit, mint, withdraw or redeem functions. This allows the attacker to artificially inflate or deflate the price within a single block/transaction.\\n```\\nFile: ERC4626.sol\\n function previewRedeem(uint256 shares) public view virtual returns (uint256) {\\n return convertToAssets(shares);\\n }\\n```\\n\\n```\\nFile: ERC4626.sol\\n function convertToAssets(uint256 shares) public view virtual returns (uint256) {\\n uint256 supply = totalSupply; // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? shares : shares.mulDivDown(totalAssets(), supply);\\n }\\n```\\n | Avoid using `previewRedeem` function to calculate the price of the LP token of an ERC4626 vault. Consider implementing TWAP so that the price cannot be inflated or deflated within a single block/transaction or within a short period of time.\\nSentiment Team\\nDepends on the integration itself, so there's no action that can be taken right now.\\nLead Senior Watson\\nAcknowledged. | The attacker could perform price manipulation to make the apparent value of an asset to be much higher or much lower than the true value of the asset. Following are some risks of price manipulation:\\nAn attacker can increase the value of their collaterals to increase their borrowing power so that they can borrow more assets than they are allowed from Sentiment.\\nAn attacker can decrease the value of some collaterals and attempt to liquidate another user account prematurely. | ```\\nFile: ERC4626Oracle.sol\\n function getPrice(address token) external view returns (uint) {\\n uint decimals = IERC4626(token).decimals();\\n return IERC4626(token).previewRedeem(\\n 10 ** decimals\\n ).mulDivDown(\\n oracleFacade.getPrice(IERC4626(token).asset()),\\n 10 ** decimals\\n );\\n }\\n```\\n |
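A simplified sketch of one possible mitigation direction: only accept the spot pricePerShare when it stays within a bounded deviation of an observation committed in an earlier block. This is not a full TWAP (which would accumulate time-weighted observations); the interface, deviation bound, and contract name are all illustrative assumptions:

```
pragma solidity ^0.8.0;

interface IERC4626Like {
    function previewRedeem(uint256 shares) external view returns (uint256);
    function decimals() external view returns (uint8);
}

contract ERC4626PriceGuard {
    IERC4626Like public immutable vault;
    uint256 public lastObservation;       // pricePerShare committed in an earlier block
    uint256 public lastObservationBlock;
    uint256 public constant MAX_DEVIATION_BPS = 200; // 2%, illustrative

    constructor(IERC4626Like _vault) {
        vault = _vault;
        lastObservation = _spotPricePerShare();
        lastObservationBlock = block.number;
    }

    function _spotPricePerShare() internal view returns (uint256) {
        uint256 unit = 10 ** vault.decimals();
        return vault.previewRedeem(unit);
    }

    // Anyone may roll the observation forward, but never in the same block, so
    // a flash-loan deposit/withdraw cannot both set and consume a skewed price.
    function update() external {
        require(block.number > lastObservationBlock, "same block");
        lastObservation = _spotPricePerShare();
        lastObservationBlock = block.number;
    }

    function safePricePerShare() external view returns (uint256) {
        uint256 spot = _spotPricePerShare();
        uint256 upper = lastObservation * (10_000 + MAX_DEVIATION_BPS) / 10_000;
        uint256 lower = lastObservation * (10_000 - MAX_DEVIATION_BPS) / 10_000;
        require(spot <= upper && spot >= lower, "price deviates");
        return spot;
    }
}
```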
`Reserves` should not be considered part of the available liquidity while calculating the interest rate | medium | The implementation is different from the documentation regarding the interest rate formula.\\nThe formula given in the docs:\\nCalculates Borrow rate per second:\\n$$ Borrow Rate Per Second = c3 \\cdot (util \\cdot c1 + util^{32} \\cdot c1 + util^{64} \\cdot c2) \\div secsPerYear $$\\nwhere, $util = borrows \\div (liquidity - reserves + borrows)$\\n$$ util=borrows \\div (liquidity−reserves+borrows) $$\\n```\\n function getRateFactor() internal view returns (uint) {\\n return (block.timestamp == lastUpdated) ?\\n 0 :\\n ((block.timestamp - lastUpdated)*1e18)\\n .mulWadUp(\\n rateModel.getBorrowRatePerSecond(\\n asset.balanceOf(address(this)),\\n borrows\\n )\\n );\\n }\\n```\\n\\nHowever, the current implementation is taking all the balance as the liquidity:\\n```\\n function getBorrowRatePerSecond(\\n uint liquidity,\\n uint borrows\\n )\\n external\\n view\\n returns (uint)\\n {\\n uint util = _utilization(liquidity, borrows);\\n return c3.mulDivDown(\\n (\\n util.mulWadDown(c1)\\n + util.rpow(32, SCALE).mulWadDown(c1)\\n + util.rpow(64, SCALE).mulWadDown(c2)\\n ),\\n secsPerYear\\n );\\n }\\n```\\n\\n```\\n function _utilization(uint liquidity, uint borrows)\\n internal\\n pure\\n returns (uint)\\n {\\n uint totalAssets = liquidity + borrows;\\n return (totalAssets == 0) ? 0 : borrows.divWadDown(totalAssets);\\n }\\n```\\n | Issue `Reserves` should not be considered part of the available liquidity while calculating the interest rate\\nThe implementation of `getRateFactor()` can be updated to:\\n```\\nfunction getRateFactor() internal view returns (uint) {\\n return (block.timestamp == lastUpdated) ?\\n 0 :\\n ((block.timestamp - lastUpdated)*1e18)\\n .mulWadUp(\\n rateModel.getBorrowRatePerSecond(\\n asset.balanceOf(address(this)) - reserves,\\n borrows\\n )\\n );\\n}\\n```\\n\\nSentiment Team\\nRemoved reserves from LToken and added an alternate mechanism to collect direct fees.\\nLead Senior Watson\\noriginationFee may result in the borrower account becoming liquidatable immediately (aka WP-M2).\\nSentiment Team\\nFixed as recommended. PR here.\\nLead Senior Watson\\nriskEngine.isBorrowAllowed should be removed as it's no longer needed.\\nSentiment Team\\nPushed a commit to remove the redundant call to riskEngine. PR here. | Per the docs, when calculating the interest rate, `util` is the ratio of available liquidity to the `borrows`, available liquidity should not include reserves.\\nThe current implementation is using all the balance as the `liquidity`, this will make the interest rate lower than expectation.\\nPoC\\nGiven:\\n`asset.address(this) + borrows = 10000`\\n`reserves = 1500, borrows = 7000`\\nExpected result:\\nWhen calculating `getRateFactor()`, available liquidity should be `asset.balanceOf(address(this)) - reserves = 1500, util = 7000 / 8500 = 0.82`, `getBorrowRatePerSecond() = 9114134329`\\nActual result:\\nWhen calculating `getRateFactor()`, `asset.balanceOf(address(this)) = 3000, util = 0.7e18`, `getBorrowRatePerSecond() = 7763863430`\\nThe actual interest rate is only `7763863430 / 9114134329 = 85%` of the expected rate. | ```\\n function getRateFactor() internal view returns (uint) {\\n return (block.timestamp == lastUpdated) ?\\n 0 :\\n ((block.timestamp - lastUpdated)*1e18)\\n .mulWadUp(\\n rateModel.getBorrowRatePerSecond(\\n asset.balanceOf(address(this)),\\n borrows\\n )\\n );\\n }\\n```\\n |
LToken's implementation is not fully up to EIP-4626's specification | medium | Note: This issue is a part of the extra scope added by Sentiment AFTER the audit contest. This scope was only reviewed by WatchPug and relates to these three PRs:\\nLending deposit cap\\nFee accrual modification\\nCRV staking\\nLToken's implementation is not fully up to EIP-4626's specification. This issue would actually be considered a Low issue if it were a part of a Sherlock contest.\\n```\\nfunction maxMint(address) public view virtual returns (uint256) {\\n return type(uint256).max;\\n}\\n```\\n\\nMUST return the maximum amount of shares mint would allow to be deposited to receiver and not cause a revert, which MUST NOT be higher than the actual maximum that would be accepted (it should underestimate if necessary). This assumes that the user has infinite assets, i.e. MUST NOT rely on balanceOf of asset.\\nmaxMint() and maxDeposit() should reflect the limitation of maxSupply. | maxMint() and maxDeposit() should reflect the limitation of maxSupply.\\nConsider changing maxMint() and maxDeposit() to:\\n```\\nfunction maxMint(address) public view virtual returns (uint256) {\\n if (totalSupply >= maxSupply) {\\n return 0;\\n }\\n return maxSupply - totalSupply;\\n}\\n```\\n\\n```\\nfunction maxDeposit(address) public view virtual returns (uint256) {\\n return convertToAssets(maxMint(address(0)));\\n}\\n```\\n\\nSentiment Team\\nFixed as recommended. PR here.\\nLead Senior Watson\\nConfirmed fix. | Could cause unexpected behavior in the future due to non-compliance with the EIP-4626 standard. | ```\\nfunction maxMint(address) public view virtual returns (uint256) {\\n return type(uint256).max;\\n}\\n```\\n |
`UniV2LPOracle` will malfunction if token0 or token1's `decimals != 18` | high | When one of the LP token's underlying tokens `decimals` is not 18, the price of the LP token calculated by `UniV2LPOracle` will be wrong.\\n`UniV2LPOracle` is an implementation of Alpha Homora v2's Fair Uniswap's LP Token Pricing Formula:\\nThe Formula ... of combining fair asset prices and fair asset reserves:\\n$$ P = 2\\cdot \\frac{\\sqrt{r_0 \\cdot r_1} \\cdot \\sqrt{p_0\\cdot p_1}}{totalSupply}, $$\\nwhere $r_i$ is the asset ii's pool balance and $p_i$ is the asset $i$'s fair price.\\nHowever, the current implementation wrongful assumes $r_0$ and $r_1$ are always in 18 decimals.\\n```\\nfunction getPrice(address pair) external view returns (uint) {\\n (uint r0, uint r1,) = IUniswapV2Pair(pair).getReserves();\\n\\n // 2 * sqrt(r0 * r1 * p0 * p1) / totalSupply\\n return FixedPointMathLib.sqrt(\\n r0\\n .mulWadDown(r1)\\n .mulWadDown(oracle.getPrice(IUniswapV2Pair(pair).token0()))\\n .mulWadDown(oracle.getPrice(IUniswapV2Pair(pair).token1()))\\n )\\n .mulDivDown(2e27, IUniswapV2Pair(pair).totalSupply());\\n}\\n```\\n\\n```\\nuint256 internal constant WAD = 1e18; // The scalar of ETH and most ERC20s.\\n\\nfunction mulWadDown(uint256 x, uint256 y) internal pure returns (uint256) {\\n return mulDivDown(x, y, WAD); // Equivalent to (x * y) / WAD rounded down.\\n}\\n```\\n\\n```\\nfunction mulDivDown(\\n uint256 x,\\n uint256 y,\\n uint256 denominator\\n) internal pure returns (uint256 z) {\\n assembly {\\n // Store x * y in z for now.\\n z := mul(x, y)\\n\\n // Equivalent to require(denominator != 0 && (x == 0 || (x * y) / x == y))\\n if iszero(and(iszero(iszero(denominator)), or(iszero(x), eq(div(z, x), y)))) {\\n revert(0, 0)\\n }\\n\\n // Divide z by the denominator.\\n z := div(z, denominator)\\n }\\n}\\n```\\n | Issue `UniV2LPOracle` will malfunction if token0 or token1's `decimals != 18`\\nConsider normalizing r0 and r1 to 18 decimals before using them in the formula.\\nSentiment Team\\nFixed as recommended. PRs here and here.\\nLead Senior Watson\\nConfirmed fix. | When the decimals of one or both tokens in the pair is not 18, the price will be way off. | ```\\nfunction getPrice(address pair) external view returns (uint) {\\n (uint r0, uint r1,) = IUniswapV2Pair(pair).getReserves();\\n\\n // 2 * sqrt(r0 * r1 * p0 * p1) / totalSupply\\n return FixedPointMathLib.sqrt(\\n r0\\n .mulWadDown(r1)\\n .mulWadDown(oracle.getPrice(IUniswapV2Pair(pair).token0()))\\n .mulWadDown(oracle.getPrice(IUniswapV2Pair(pair).token1()))\\n )\\n .mulDivDown(2e27, IUniswapV2Pair(pair).totalSupply());\\n}\\n```\\n |
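A sketch of the recommended normalization, scaling both reserves to 18 decimals before feeding them into the fair-LP formula (assumes both tokens have 18 or fewer decimals; the interface and library names are illustrative):

```
pragma solidity ^0.8.0;

interface IUniswapV2PairLike {
    function getReserves() external view returns (uint112, uint112, uint32);
    function token0() external view returns (address);
    function token1() external view returns (address);
}

interface IERC20Decimals {
    function decimals() external view returns (uint8);
}

library ReserveScaling {
    // Scale both reserves to 18 decimals so non-18-decimal tokens (e.g. USDC,
    // WBTC) do not distort sqrt(r0 * r1 * p0 * p1) / totalSupply.
    function normalizedReserves(IUniswapV2PairLike pair) internal view returns (uint256 r0, uint256 r1) {
        (uint112 _r0, uint112 _r1, ) = pair.getReserves();
        uint256 scale0 = 10 ** (18 - uint256(IERC20Decimals(pair.token0()).decimals()));
        uint256 scale1 = 10 ** (18 - uint256(IERC20Decimals(pair.token1()).decimals()));
        r0 = uint256(_r0) * scale0;
        r1 = uint256(_r1) * scale1;
    }
}
```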
Tokens received from Curve's `remove_liquidity()` should be added to the assets list even if `_min_amounts` are set to `0` | high | Curve controller's `canRemoveLiquidity()` should return all the underlying tokens as `tokensIn` rather than only the tokens with `minAmount > 0`.\\n```\\nfunction canRemoveLiquidity(address target, bytes calldata data)\\n internal\\n view\\n returns (bool, address[] memory, address[] memory)\\n{\\n (,uint256[2] memory amounts) = abi.decode(\\n data[4:],\\n (uint256, uint256[2])\\n );\\n\\n address[] memory tokensOut = new address[](1);\\n tokensOut[0] = target;\\n\\n uint i; uint j;\\n address[] memory tokensIn = new address[](2);\\n while(i < 2) {\\n if(amounts[i] > 0)\\n tokensIn[j++] = IStableSwapPool(target).coins(i);\\n unchecked { ++i; }\\n }\\n assembly { mstore(tokensIn, j) }\\n\\n return (true, tokensIn, tokensOut);\\n}\\n```\\n\\nThe `amounts` in Curve controller's `canRemoveLiquidity()` represent the "Minimum `amounts` of underlying coins to receive", which is used for slippage control.\\nAt L144-149, only the tokens that specified a minAmount > 0 will be added to the `tokensIn` list, which will later be added to the account's assets list.\\nWe believe this is wrong as regardless of the minAmount `remove_liquidity()` will always receive all the underlying tokens.\\nTherefore, it should not check and only add the token when it's minAmount > 0. | `canRemoveLiquidity()` can be changed to:\\n```\\nfunction canRemoveLiquidity(address target, bytes calldata data)\\n internal\\n view\\n returns (bool, address[] memory, address[] memory)\\n{\\n address[] memory tokensOut = new address[](1);\\n tokensOut[0] = target;\\n\\n address[] memory tokensIn = new address[](2);\\n tokensIn[0] = IStableSwapPool(target).coins(0);\\n tokensIn[1] = IStableSwapPool(target).coins(1);\\n return (true, tokensIn, tokensOut);\\n}\\n```\\n\\nSentiment Team\\nFixed as recommended. PR here.\\nLead Senior Watson\\nConfirmed fix. | When the user set `_min_amounts` = `0` while removing liquidity from `Curve` and the withdrawn tokens are not in the account's assets list already, the user may get liquidated sooner than expected as `RiskEngine.sol#_getBalance()` only counts in the assets in the assets list. | ```\\nfunction canRemoveLiquidity(address target, bytes calldata data)\\n internal\\n view\\n returns (bool, address[] memory, address[] memory)\\n{\\n (,uint256[2] memory amounts) = abi.decode(\\n data[4:],\\n (uint256, uint256[2])\\n );\\n\\n address[] memory tokensOut = new address[](1);\\n tokensOut[0] = target;\\n\\n uint i; uint j;\\n address[] memory tokensIn = new address[](2);\\n while(i < 2) {\\n if(amounts[i] > 0)\\n tokensIn[j++] = IStableSwapPool(target).coins(i);\\n unchecked { ++i; }\\n }\\n assembly { mstore(tokensIn, j) }\\n\\n return (true, tokensIn, tokensOut);\\n}\\n```\\n |
Accounts with ETH loans can not be liquidated if LEther's underlying is set to `address(0)` | medium | Setting `address(0)` as LEther's `underlying` is allowed, and the logic in `AccountManager#settle()` and `RiskEngine#_valueInWei()` handles `address(0)` specially, which implies that `address(0)` can be an asset.\\nHowever, if LEther's underlying is set to `address(0)`, the accounts with ETH loans will become unable to be liquidated.\\nGiven that at `AccountManager.sol#L100` in `settle()` and `RiskEngine.sol#L186` in `_valueInWei()`, they both handle the case where `asset == address(0)`, and in `Registry.sol#setLToken()`, `underlying == address(0)` is allowed:\\nWe assume that `address(0)` can be set as the `underlying` of `LEther`.\\nIn that case, when the user borrows native tokens, `address(0)` will be added to the user's assets and borrows list.\\n```\\nfunction borrow(address account, address token, uint amt)\\n external\\n whenNotPaused\\n onlyOwner(account)\\n{\\n if (registry.LTokenFor(token) == address(0))\\n revert Errors.LTokenUnavailable();\\n if (!riskEngine.isBorrowAllowed(account, token, amt))\\n revert Errors.RiskThresholdBreached();\\n if (IAccount(account).hasAsset(token) == false)\\n IAccount(account).addAsset(token);\\n if (ILToken(registry.LTokenFor(token)).lendTo(account, amt))\\n IAccount(account).addBorrow(token);\\n emit Borrow(account, msg.sender, token, amt);\\n}\\n```\\n\\nThis will later prevent the user from being liquidated because `riskEngine.isAccountHealthy()` calls `_getBalance()` in the for loop over all the assets, which assumes all the assets comply with `IERC20`. Thus, the transaction will revert at L157 when calling `IERC20(address(0)).balanceOf(account)`.\\n```\\nfunction liquidate(address account) external {\\n if (riskEngine.isAccountHealthy(account))\\n revert Errors.AccountNotLiquidatable();\\n _liquidate(account);\\n emit AccountLiquidated(account, registry.ownerFor(account));\\n}\\n```\\n\\n```\\nfunction _getBalance(address account) internal view returns (uint) {\\n address[] memory assets = IAccount(account).getAssets();\\n uint assetsLen = assets.length;\\n uint totalBalance;\\n for(uint i; i < assetsLen; ++i) {\\n totalBalance += _valueInWei(\\n assets[i],\\n IERC20(assets[i]).balanceOf(account)\\n );\\n }\\n return totalBalance + account.balance;\\n}\\n```\\n | Issue Accounts with ETH loans can not be liquidated if LEther's underlying is set to `address(0)`\\nConsider removing the misleading logic in `AccountManager#settle()` and `RiskEngine#_valueInWei()` that handles `address(0)` as an asset;\\nConsider disallowing adding `address(0)` as `underlying` in `setLToken()`.\\nSentiment Team\\nFixed as recommended. PR here.\\nLead Senior Watson\\nConfirmed fix. | We noticed that in the deployment documentation, LEther is set to init with WETH as the `underlying`. Therefore, this should not be an issue if the system is being deployed correctly.\\n```\\n1. ETH\\n 1. Deploy LEther implementation\\n 2. Deploy Proxy(LEther)\\n 3. call init(WETH), "LEther", "LEth", IRegistry, reserveFactor)\\n 4. call Registry.setLToken(WETH, Proxy)\\n 5. call accountManager.toggleCollateralStatus(token)\\n 6. call Proxy.initDep()\\n```\\n\\nBut considering that setting `address(0)` as LEther's `underlying` is still plausible and the potential damage to the whole protocol is high (all the accounts with ETH loans can not be liquidated), we believe that this should be a medium severity issue. | ```\\nfunction borrow(address account, address token, uint amt)\\n external\\n whenNotPaused\\n onlyOwner(account)\\n{\\n if (registry.LTokenFor(token) == address(0))\\n revert Errors.LTokenUnavailable();\\n if (!riskEngine.isBorrowAllowed(account, token, amt))\\n revert Errors.RiskThresholdBreached();\\n if (IAccount(account).hasAsset(token) == false)\\n IAccount(account).addAsset(token);\\n if (ILToken(registry.LTokenFor(token)).lendTo(account, amt))\\n IAccount(account).addBorrow(token);\\n emit Borrow(account, msg.sender, token, amt);\\n}\\n```\\n |
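A minimal sketch of the second recommendation, rejecting `address(0)` as an underlying at registration time (the storage layout, modifier, and error name are simplified assumptions about the Registry contract):

```
pragma solidity ^0.8.0;

contract RegistrySketch {
    error ZeroAddress();

    mapping(address => address) public LTokenFor;

    // Disallow address(0) as an underlying so no account can ever hold an
    // asset that does not comply with IERC20.
    function setLToken(address underlying, address lToken) external /* onlyOwner in the real contract */ {
        if (underlying == address(0)) revert ZeroAddress();
        LTokenFor[underlying] = lToken;
    }
}
```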
Missing revert keyword | medium | Missing `revert` keyword in `functionDelegateCall` bypasses an intended safety check, allowing the function to fail silently.\\nIn the helper function `functionDelegateCall`, there is a check to confirm that the target being called is a contract.\\n```\\nif (!isContract(target)) Errors.AddressNotContract;\\n```\\n\\nHowever, there is a typo in the check that is missing the `revert` keyword.\\nAs a result, non-contracts can be submitted as targets, which will cause the delegatecall below to return success (because EVM treats no code as STOP opcode), even though it doesn't do anything.\\n```\\n(bool success, ) = target.delegatecall(data);\\nrequire(success, "CALL_FAILED");\\n```\\n | Issue Missing revert keyword\\nAdd missing `revert` keyword to L70 of Helpers.sol.\\n```\\nif (!isContract(target)) revert Errors.AddressNotContract;\\n```\\n\\nSentiment Team\\nFixed as recommended. PR here.\\nLead Senior Watson\\nConfirmed fix. | The code doesn't accomplish its intended goal of checking to confirm that only contracts are passed as targets, so delegatecalls can silently fail. | ```\\nif (!isContract(target)) Errors.AddressNotContract;\\n```\\n |
No Limit for Minting Amount | high | In the token contract `FiatTokenV1`, there is no limit set for the amount of tokens that can be minted. As a result, the minter can mint unlimited tokens, disrupting the token supply and value.\\n```\\nfunction mint(address to, uint256 amount) public onlyRole(MINTER_ROLE) {\\n _mint(to, amount);\\n}\\n```\\n | Add a limit for the number of tokens the minter can mint. | null | ```\\nfunction mint(address to, uint256 amount) public onlyRole(MINTER_ROLE) {\\n _mint(to, amount);\\n}\\n```\\n |
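A hedged sketch of a supply cap. Note the audited token is upgradeable and uses the upgradeable OpenZeppelin base contracts; this example uses the non-upgradeable variants purely for brevity, and the cap value and role wiring are illustrative:

```
pragma solidity ^0.8.0;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";

contract CappedFiatToken is ERC20, AccessControl {
    bytes32 public constant MINTER_ROLE = keccak256("MINTER_ROLE");
    uint256 public immutable maxSupply;

    constructor(uint256 _maxSupply) ERC20("Fiat Token", "FIAT") {
        maxSupply = _maxSupply;
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender);
    }

    // Minting is bounded by maxSupply, so a compromised or careless minter
    // cannot inflate the supply without limit.
    function mint(address to, uint256 amount) public onlyRole(MINTER_ROLE) {
        require(totalSupply() + amount <= maxSupply, "cap exceeded");
        _mint(to, amount);
    }
}
```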
Private Key Is Exposed in the Deployment and Upgrade Script | high | In the contract deployment and upgrade scripts, a private key is used to broadcast the transactions. This exposes the private key of the deployer and upgrader accounts on the machine running the script, thereby compromising these accounts.\\n```\\nuint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");\\n```\\n\\n```\\nuint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");\\nvm.startBroadcast(deployerPrivateKey);\\n```\\n | Have Forge send the raw transaction to the account's cold wallet; the wallet signs the transaction and then returns the signed transaction to Forge for broadcasting. Alternatively, use a different wallet for deployment and upgrade and stop using that wallet after the script is complete | null | ```\\nuint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");\\n```\\n |
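A sketch of a Foundry script that broadcasts without ever loading the raw key on the script machine; the signer is supplied at invocation time, e.g. via `forge script ... --ledger --broadcast` or an encrypted keystore selected with `--account`. The deployment body is omitted as a placeholder:

```
pragma solidity ^0.8.0;

import {Script} from "forge-std/Script.sol";

contract DeploySketch is Script {
    function run() external {
        // No private key argument: the sender is provided by the CLI
        // (hardware wallet or keystore), so PRIVATE_KEY never sits in the env.
        vm.startBroadcast();
        // ... deploy the implementation and proxy here ...
        vm.stopBroadcast();
    }
}
```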
Critical Functions Are Public and Without Access Control | medium | Critical functions in `RescuableV1` (rescue) and `BlacklistableV1` (blacklist, unBlacklist) are public and unauthenticated; anyone can call these functions to steal funds and blacklist other accounts. Although the child contract `FiatTokenV1` has authenticated the overridden functions and protected them from public access, other contracts inheriting `RescuableV1` and `BlacklistableV1` might have risks from the unauthenticated public functions\\n```\\nfunction rescue(IERC20 token, address to, uint256 amount) public virtual {\\n```\\n\\n```\\nfunction blacklist(address account) public virtual {\\n _blacklisted[account] = true;\\n emit Blacklisted(account);\\n}\\n\\n/**\\n * @dev Removes account from blacklist\\n * @param account The address to remove from the blacklist\\n */\\nfunction unBlacklist(address account) public virtual {\\n _blacklisted[account] = false;\\n emit UnBlacklisted(account);\\n}\\n```\\n | Make these functions internal, and in the child contract add corresponding public functions with authentication to call the inherited functions | null | ```\\nfunction rescue(IERC20 token, address to, uint256 amount) public virtual {\\n```\\n |
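A sketch of the recommended pattern: keep the state-changing logic internal in the parent and expose it only through an authenticated wrapper in the child. The access check shown (a stored `blacklister` address) is a simplification of the role-based control used by `FiatTokenV1`:

```
pragma solidity ^0.8.0;

abstract contract BlacklistableSketch {
    mapping(address => bool) internal _blacklisted;
    event Blacklisted(address indexed account);

    // Internal only: cannot be called externally by arbitrary accounts.
    function _blacklist(address account) internal {
        _blacklisted[account] = true;
        emit Blacklisted(account);
    }
}

abstract contract FiatTokenSketch is BlacklistableSketch {
    address public blacklister;

    // Public entry point with authentication, delegating to the internal logic.
    function blacklist(address account) public {
        require(msg.sender == blacklister, "not blacklister");
        _blacklist(account);
    }
}
```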
Unnecessary Parent Contracts | low | Contracts `BlacklistableV1` and `RescuableV1` extend `ContextUpgradeable` and `ERC20Upgradeable`, which are not used in any of the contracts' functions and are already inherited by the child contract `FiatTokenV1`.\\n```\\nabstract contract BlacklistableV1 is Initializable, ContextUpgradeable, ERC20Upgradeable {\\n```\\n\\n```\\nabstract contract RescuableV1 is Initializable, ContextUpgradeable, ERC20Upgradeable {\\n```\\n\\n```\\ncontract FiatTokenV1 is\\n Initializable,\\n ERC20Upgradeable,\\n ERC20PausableUpgradeable,\\n ERC20BurnableUpgradeable,\\n AccessControlUpgradeable,\\n ERC20PermitUpgradeable,\\n UUPSUpgradeable,\\n BlacklistableV1,\\n RescuableV1\\n{\\n```\\n | Remove the unnecessary parent contracts | null | ```\\nabstract contract BlacklistableV1 is Initializable, ContextUpgradeable, ERC20Upgradeable {\\n```\\n |
Redundant _disableInitializers in Constructor | low | Contract `FiatTokenV1` inherits from contracts `BlacklistableV1` and `RescuableV1`. The two parent contracts both call `_disableInitializers` in their constructors to prevent the uninitialized implementation contract from being initialized by an attacker, so calling `_disableInitializers` again in FiatTokenV1's constructor is redundant and inefficient.\\n```\\nconstructor() {\\n \\_disableInitializers();\\n}\\n```\\n | Remove the constructor from `FiatTokenV1`. | null | ```\\nconstructor() {\\n \\_disableInitializers();\\n}\\n```\\n
Incorrect Final Block Number Can Be Finalized | high | In the data finalization function `finalizeCompressedBlocksWithProof`, `finalizationData.finalBlockNumber` is the final block number of the compressed block data to be finalized. However, there is no check in the contract or the prover to ensure `finalBlockNumber` is correct when there is no new data submitted in the finalization, i.e., `submissionDataLength == 0`. The prover can submit an incorrect final block number and, as a result, the finalized block number (currentL2BlockNumber) would be incorrect. Consequently, the prover can skip block data in the finalization.\\n```\\ncurrentL2BlockNumber = \\_finalizationData.finalBlockNumber;\\n```\\n\\n```\\nif (stateRootHashes[currentL2BlockNumber] != \\_finalizationData.parentStateRootHash) {\\n revert StartingRootHashDoesNotMatch();\\n}\\n```\\n | Resolution\\nFixed in PR-24 by adding the recommended check that `finalBlockNumber` matches the last block number of the submitted data in `_finalizeCompressedBlocks`, a check in the prover, and `finalBlockNumber` and `lastFinalizedBlockNumber` in the public input of the verifier in the finalization.\\nIn `_finalizeCompressedBlocks`, check if `finalBlockNumber` is equal to the last block number (finalBlockInData) of the last item of submitted block data. Another solution is to have the prover show that `finalBlockNumber` is correct in the proof by providing the last finalized block number (lastFinalizedBlockNumber) and verify it by adding `finalBlockNumber` and `lastFinalizedBlockNumber` in the public input of the verifier in the finalization. | null | ```\\ncurrentL2BlockNumber = \\_finalizationData.finalBlockNumber;\\n```\\n
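For illustration only, the first recommended check inside `_finalizeCompressedBlocks` could look like the following sketch, where `lastSubmittedFinalBlock` is a hypothetical variable tracking the `finalBlockInData` of the most recently submitted data item and the error name is invented:\\n```\\nif (_finalizationData.finalBlockNumber != lastSubmittedFinalBlock) {\\n revert FinalBlockNumberDoesNotMatch(); // hypothetical error\\n}\\n```\\n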
Finalization Fails for the First Batch of Data Submitted After Migration to the Updated Contract | high | When submitting the initial batch of compressed block data after the contract update, the finalization will fail.\\nIn function `_finalizeCompressedBlocks`, `startingDataParentHash = dataParents[_finalizationData.dataHashes[0]]` will be empty and, therefore, `startingParentFinalStateRootHash = dataFinalStateRootHashes[startingDataParentHash]` will be empty too. The check `_finalizationData.parentStateRootHash == stateRootHashes[currentL2BlockNumber]` requires `_finalizationData.parentStateRootHash == _initialStateRootHash`, which is not empty, so the condition `startingParentFinalStateRootHash != _finalizationData.parentStateRootHash` is true, and we revert with the error FinalStateRootHashDoesNotMatch:\\n```\\nif (stateRootHashes[currentL2BlockNumber] != \\_finalizationData.parentStateRootHash) {\\n revert StartingRootHashDoesNotMatch();\\n}\\n```\\n\\n```\\nif (finalizationDataDataHashesLength != 0) {\\n bytes32 startingDataParentHash = dataParents[\\_finalizationData.dataHashes[0]];\\n\\n if (startingDataParentHash != \\_finalizationData.dataParentHash) {\\n revert ParentHashesDoesNotMatch(startingDataParentHash, \\_finalizationData.dataParentHash);\\n }\\n\\n bytes32 startingParentFinalStateRootHash = dataFinalStateRootHashes[startingDataParentHash];\\n\\n if (startingParentFinalStateRootHash != \\_finalizationData.parentStateRootHash) {\\n revert FinalStateRootHashDoesNotMatch(startingParentFinalStateRootHash, \\_finalizationData.parentStateRootHash);\\n }\\n```\\n | Set the correct initial value for `dataFinalStateRootHashes` for the initial batch of compressed block data. | null | ```\\nif (stateRootHashes[currentL2BlockNumber] != \\_finalizationData.parentStateRootHash) {\\n revert StartingRootHashDoesNotMatch();\\n}\\n```\\n |
Prover Can Censor L2 → L1 Messages Partially Addressed | high | In L2 → L1 messaging, messages are grouped and added to a Merkle tree by the prover. During finalization, the operator (coordinator) submits the Merkle root to L1, and the user SDK rebuilds the tree to which the message is added and generates a Merkle proof to claim against the root finalized on L1. However, the prover can skip messages when building the tree. Consequently, the user cannot claim the skipped message, which might result in frozen funds.\\nCurrently, the prover is a single entity owned by Linea. Hence, this would require malice or negligence on Linea's part.\\n```\\n\\_addL2MerkleRoots(\\_finalizationData.l2MerkleRoots, \\_finalizationData.l2MerkleTreesDepth);\\n\\_anchorL2MessagingBlocks(\\_finalizationData.l2MessagingBlocksOffsets, lastFinalizedBlock);\\n```\\n | Decentralize the prover, so messages can be included by different provers. | null | ```\\n\\_addL2MerkleRoots(\\_finalizationData.l2MerkleRoots, \\_finalizationData.l2MerkleTreesDepth);\\n\\_anchorL2MessagingBlocks(\\_finalizationData.l2MessagingBlocksOffsets, lastFinalizedBlock);\\n```\\n |
Malicious Operator Might Finalize Data From a Forked Linea Chain | high | A malicious operator (prover) can add and finalize block data from a forked Linea chain, so transactions on the forked chain can be finalized, causing a loss of funds from the L1.\\nFor example, a malicious operator forks the canonical chain, then the attacker sends the forked chain Ether to L1 with `sendMessage` from the forked L2. The operator then submits the block data to L1 and finalizes it with `finalizeCompressedBlocksWithProof`, using the finalization data and proof from the forked chain. (Note that the malicious prover sets the forked chain `chainId` in its circuit as a constant.) The L1 contract (LineaRollup) doesn't know whether the data and the proof are from the canonical L2 or the forked one. The finalization succeeds, and the attacker can claim the bridged forked chain Ether and steal funds from L1.\\nAs there is currently only one operator and it is owned by the Linea team, this kind of attack is unlikely to happen. However, when the operator and the coordinator are decentralized, the likelihood of this attack increases.\\n```\\nuint256 publicInput = uint256(\\n keccak256(\\n abi.encode(\\n shnarf,\\n \\_finalizationData.parentStateRootHash,\\n \\_finalizationData.lastFinalizedTimestamp,\\n \\_finalizationData.finalBlockNumber,\\n \\_finalizationData.finalTimestamp,\\n \\_finalizationData.l1RollingHash,\\n \\_finalizationData.l1RollingHashMessageNumber,\\n keccak256(abi.encodePacked(\\_finalizationData.l2MerkleRoots))\\n )\\n```\\n\\n```\\n\\_addL2MerkleRoots(\\_finalizationData.l2MerkleRoots, \\_finalizationData.l2MerkleTreesDepth);\\n```\\n | Add `chainId` in the `FinalizationData` as a public input of the verifier function `_verifyProof`, so the proof from the forked Linea chain will not pass the verification because the `chainId` won't match. | null | ```\\nuint256 publicInput = uint256(\\n keccak256(\\n abi.encode(\\n shnarf,\\n \\_finalizationData.parentStateRootHash,\\n \\_finalizationData.lastFinalizedTimestamp,\\n \\_finalizationData.finalBlockNumber,\\n \\_finalizationData.finalTimestamp,\\n \\_finalizationData.l1RollingHash,\\n \\_finalizationData.l1RollingHashMessageNumber,\\n keccak256(abi.encodePacked(\\_finalizationData.l2MerkleRoots))\\n )\\n```\\n |
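As an illustration, the public input computation could bind proofs to the expected L2 chain by mixing a stored chain id into the existing hash (a sketch only; `l2ChainId` is a hypothetical storage variable and the circuit must constrain the same value):\\n```\\nuint256 public l2ChainId; // hypothetical, set at initialization\\n\\nuint256 publicInput = uint256(\\n keccak256(\\n abi.encode(\\n shnarf,\\n l2ChainId,\\n _finalizationData.parentStateRootHash,\\n _finalizationData.lastFinalizedTimestamp,\\n _finalizationData.finalBlockNumber,\\n _finalizationData.finalTimestamp,\\n _finalizationData.l1RollingHash,\\n _finalizationData.l1RollingHashMessageNumber,\\n keccak256(abi.encodePacked(_finalizationData.l2MerkleRoots))\\n )\\n )\\n);\\n```\\n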
The Compressed Block Data Is Not Verified Against Data in the Prover During Data Submission Acknowledged | medium | When the sequencer submits the batched block data with the `submitData` function, it's expected to check that the submitted commitment of the compressed block data `keccak(_submissionData.compressedData)` and the commitment of the block data used in the prover (snarkHash) commit to the same data. This is done by proof of equivalence; the `x` is calculated by hashing `keccak(_submissionData.compressedData)` and `snarkHash`, and `y` is provided by the prover. Then it's verified that `P(x) = y`, where `P` is a polynomial that encodes the compressed data (_submissionData.compressedData). However, in the `submitData` function, `y` is evaluated by `_calculateY` but it is not checked against the `y` provided by the prover. In fact, the prover doesn't provide `y` to the function; instead `x` and `y` are provided to the prover who would evaluate `y'` and compare it with `y` from the contract, then `x` and `y` are included in the public input for the proof verification in the finalization.\\n```\\nshnarf = keccak256(\\n abi.encode(\\n shnarf,\\n _submissionData.snarkHash,\\n _submissionData.finalStateRootHash,\\n compressedDataComputedX,\\n _calculateY(_submissionData.compressedData, compressedDataComputedX)\\n )\\n ); \\n```\\n\\nThe only difference is if the two commitments don't commit to the same block data (meaning the data submitted doesn't match the data used in the prover), `submitData` would fail - while in the current implementation, it would fail in the proof verification during the finalization. As a result, if the data submitted doesn't match the data in the prover in the finalization, the operator has to submit the correct data again in order to finalize it. 
Linea stated they will verify it in the data submission, once EIP-4844 is implemented.\\n```\\nfunction \\_submitData(SubmissionData calldata \\_submissionData) internal returns (bytes32 shnarf) {\\n shnarf = dataShnarfHashes[\\_submissionData.dataParentHash];\\n\\n bytes32 parentFinalStateRootHash = dataFinalStateRootHashes[\\_submissionData.dataParentHash];\\n uint256 lastFinalizedBlock = currentL2BlockNumber;\\n\\n if (\\_submissionData.firstBlockInData <= lastFinalizedBlock) {\\n revert FirstBlockLessThanOrEqualToLastFinalizedBlock(\\_submissionData.firstBlockInData, lastFinalizedBlock);\\n }\\n\\n if (\\_submissionData.firstBlockInData > \\_submissionData.finalBlockInData) {\\n revert FirstBlockGreaterThanFinalBlock(\\_submissionData.firstBlockInData, \\_submissionData.finalBlockInData);\\n }\\n\\n if (\\_submissionData.parentStateRootHash != parentFinalStateRootHash) {\\n revert StateRootHashInvalid(parentFinalStateRootHash, \\_submissionData.parentStateRootHash);\\n }\\n\\n bytes32 currentDataHash = keccak256(\\_submissionData.compressedData);\\n\\n if (dataFinalStateRootHashes[currentDataHash] != EMPTY\\_HASH) {\\n revert DataAlreadySubmitted(currentDataHash);\\n }\\n\\n dataParents[currentDataHash] = \\_submissionData.dataParentHash;\\n dataFinalStateRootHashes[currentDataHash] = \\_submissionData.finalStateRootHash;\\n\\n bytes32 compressedDataComputedX = keccak256(abi.encode(\\_submissionData.snarkHash, currentDataHash));\\n\\n shnarf = keccak256(\\n abi.encode(\\n shnarf,\\n \\_submissionData.snarkHash,\\n \\_submissionData.finalStateRootHash,\\n compressedDataComputedX,\\n \\_calculateY(\\_submissionData.compressedData, compressedDataComputedX)\\n )\\n );\\n\\n dataShnarfHashes[currentDataHash] = shnarf;\\n\\n emit DataSubmitted(currentDataHash, \\_submissionData.firstBlockInData, \\_submissionData.finalBlockInData);\\n}\\n```\\n\\n```\\nfunction \\_calculateY(\\n bytes calldata \\_data,\\n bytes32 \\_compressedDataComputedX\\n) internal pure returns (bytes32 compressedDataComputedY) {\\n if (\\_data.length % 0x20 != 0) {\\n revert BytesLengthNotMultipleOf32();\\n }\\n\\n bytes4 errorSelector = ILineaRollup.FirstByteIsNotZero.selector;\\n assembly {\\n for {\\n let i := \\_data.length\\n } gt(i, 0) {\\n\\n } {\\n i := sub(i, 0x20)\\n let chunk := calldataload(add(\\_data.offset, i))\\n if iszero(iszero(and(chunk, 0xFF00000000000000000000000000000000000000000000000000000000000000))) {\\n let ptr := mload(0x40)\\n mstore(ptr, errorSelector)\\n revert(ptr, 0x4)\\n }\\n compressedDataComputedY := addmod(\\n mulmod(compressedDataComputedY, \\_compressedDataComputedX, Y\\_MODULUS),\\n chunk,\\n Y\\_MODULUS\\n )\\n }\\n }\\n}\\n```\\n | Add the compressed block data verification in the `submitData` function. | null | ```\\nshnarf = keccak256(\\n abi.encode(\\n shnarf,\\n _submissionData.snarkHash,\\n _submissionData.finalStateRootHash,\\n compressedDataComputedX,\\n _calculateY(_submissionData.compressedData, compressedDataComputedX)\\n )\\n ); \\n```\\n |
Empty Compressed Data Allowed in Data Submission | medium | In `submitData`, the coordinator can submit data with empty `compressedData` in `_submissionData`, which is not an intended use of this function and may cause undefined system behavior.\\n```\\nfunction submitData(\\n SubmissionData calldata \\_submissionData\\n)\\n external\\n whenTypeNotPaused(PROVING\\_SYSTEM\\_PAUSE\\_TYPE)\\n whenTypeNotPaused(GENERAL\\_PAUSE\\_TYPE)\\n onlyRole(OPERATOR\\_ROLE)\\n{\\n \\_submitData(\\_submissionData);\\n}\\n```\\n | Add a check to disallow data submission with empty `compressedData`. | null | ```\\nfunction submitData(\\n SubmissionData calldata \\_submissionData\\n)\\n external\\n whenTypeNotPaused(PROVING\\_SYSTEM\\_PAUSE\\_TYPE)\\n whenTypeNotPaused(GENERAL\\_PAUSE\\_TYPE)\\n onlyRole(OPERATOR\\_ROLE)\\n{\\n \\_submitData(\\_submissionData);\\n}\\n```\\n
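A minimal sketch of the recommended check (the error name is a hypothetical addition):\\n```\\nif (_submissionData.compressedData.length == 0) {\\n revert EmptySubmissionData();\\n}\\n```\\n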
Limiting the Price in the buy and onTokenTransfer Functions | medium | When an investor tries to `buy` tokens in the `Crowdinvesting` contract, the `buy` function does not allow the investor to limit the amount of currency that can be spent in this particular transaction:\\n```\\nfunction buy(uint256 \\_amount, address \\_tokenReceiver) public whenNotPaused nonReentrant {\\n // rounding up to the next whole number. Investor is charged up to one currency bit more in case of a fractional currency bit.\\n uint256 currencyAmount = Math.ceilDiv(\\_amount \\* getPrice(), 10 \\*\\* token.decimals());\\n```\\n\\nThe owner of the price oracle can front-run the transaction and manipulate the price.\\nOf course, the buyer can try to enforce such a limit with the token allowance, but there are exceptions: users sometimes want to grant a larger allowance and buy in multiple transactions over time, or even grant an infinite allowance (not recommended) out of convenience.\\nThe same issue exists in the `onTokenTransfer` function. This function works differently because the amount of currency is fixed and the amount of tokens minted is not; because of that, limiting the allowance won't help, and the user doesn't know how many tokens will be bought. | It is recommended to let the buyer explicitly limit the amount of currency that can be transferred from them in the `buy` function, and to allow users to define a minimum amount of tokens bought in the `onTokenTransfer` function. | null | ```\\nfunction buy(uint256 \\_amount, address \\_tokenReceiver) public whenNotPaused nonReentrant {\\n // rounding up to the next whole number. Investor is charged up to one currency bit more in case of a fractional currency bit.\\n uint256 currencyAmount = Math.ceilDiv(\\_amount \\* getPrice(), 10 \\*\\* token.decimals());\\n```\\n
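One possible shape of the change, shown only as a sketch (the added `maxCurrencyAmount` parameter and the minimum-tokens idea for `onTokenTransfer` are assumptions, not the project's API):\\n```\\nfunction buy(uint256 _amount, uint256 maxCurrencyAmount, address _tokenReceiver) public whenNotPaused nonReentrant {\\n uint256 currencyAmount = Math.ceilDiv(_amount * getPrice(), 10 ** token.decimals());\\n require(currencyAmount <= maxCurrencyAmount, "Currency amount exceeds buyer limit");\\n // ... existing fee, transfer, and delivery logic unchanged ...\\n}\\n\\n// In onTokenTransfer, an analogous minimum-tokens-bought value could be enforced with a similar require\\n// before minting, so the caller knows the least amount of tokens they will receive.\\n```\\n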
Potential Re-Entrancy Attack in the Crowdinvesting Contract | low | The attack requires a set of prerequisites:\\nThe currency token should have a re-entrancy opportunity inside the token transfer.\\nThe re-entrancy can be done on the token transfer from `_msgSender()` to the `feeCollector`, so only a small set of attackers can potentially execute it.\\nThe owner should be involved in the attack, so it's most likely an attack by the owner.\\n```\\nfunction buy(uint256 \\_amount, address \\_tokenReceiver) public whenNotPaused nonReentrant {\\n // rounding up to the next whole number. Investor is charged up to one currency bit more in case of a fractional currency bit.\\n uint256 currencyAmount = Math.ceilDiv(\\_amount \\* getPrice(), 10 \\*\\* token.decimals());\\n\\n (uint256 fee, address feeCollector) = \\_getFeeAndFeeReceiver(currencyAmount);\\n if (fee != 0) {\\n currency.safeTransferFrom(\\_msgSender(), feeCollector, fee);\\n }\\n\\n currency.safeTransferFrom(\\_msgSender(), currencyReceiver, currencyAmount - fee);\\n \\_checkAndDeliver(\\_amount, \\_tokenReceiver);\\n\\n emit TokensBought(\\_msgSender(), \\_amount, currencyAmount);\\n}\\n```\\n\\nSo during the token transfer to the `feeCollector` above, the `currency` state variable can be changed by the `owner`, and the following token transfer (currency.safeTransferFrom(_msgSender(), currencyReceiver, currencyAmount - fee);) will be made in a different `currency`.\\nA possible scenario of the attack could look as follows:\\nA malicious owner sells tokens for a valuable currency, and buyers grant allowances in that currency.\\nThe owner changes the currency to a new one with a much lower price and a re-entrancy opportunity during transfer.\\nWhen a victim wants to buy tokens, the owner re-enters on the fee transfer and switches back to the old currency.\\nThe victim ends up transferring the restored, more expensive currency. | Cache the `currency` address in memory at the beginning of the function and use the cached value throughout. | null | ```\\nfunction buy(uint256 \\_amount, address \\_tokenReceiver) public whenNotPaused nonReentrant {\\n // rounding up to the next whole number. Investor is charged up to one currency bit more in case of a fractional currency bit.\\n uint256 currencyAmount = Math.ceilDiv(\\_amount \\* getPrice(), 10 \\*\\* token.decimals());\\n\\n (uint256 fee, address feeCollector) = \\_getFeeAndFeeReceiver(currencyAmount);\\n if (fee != 0) {\\n currency.safeTransferFrom(\\_msgSender(), feeCollector, fee);\\n }\\n\\n currency.safeTransferFrom(\\_msgSender(), currencyReceiver, currencyAmount - fee);\\n \\_checkAndDeliver(\\_amount, \\_tokenReceiver);\\n\\n emit TokensBought(\\_msgSender(), \\_amount, currencyAmount);\\n}\\n```\\n
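A sketch of the recommended caching, assuming `currency` is an `IERC20` used with SafeERC20 as in the original snippet:\\n```\\nfunction buy(uint256 _amount, address _tokenReceiver) public whenNotPaused nonReentrant {\\n IERC20 _currency = currency; // cached once; a reentrant config change no longer affects this call\\n uint256 currencyAmount = Math.ceilDiv(_amount * getPrice(), 10 ** token.decimals());\\n\\n (uint256 fee, address feeCollector) = _getFeeAndFeeReceiver(currencyAmount);\\n if (fee != 0) {\\n _currency.safeTransferFrom(_msgSender(), feeCollector, fee);\\n }\\n\\n _currency.safeTransferFrom(_msgSender(), currencyReceiver, currencyAmount - fee);\\n _checkAndDeliver(_amount, _tokenReceiver);\\n\\n emit TokensBought(_msgSender(), _amount, currencyAmount);\\n}\\n```\\n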
Lack of Validation of PrivateOffer Initialization Parameters | low | The `PrivateOffer` contract allows creating a customised deal for a specific investor. The `initialize()` function receives parameters to set up the `PrivateOffer` accordingly.\\nThe following parameters lack validation during initialization:\\n`tokenAmount`\\n`token`\\n`currency`\\n`tokenAmount`\\n```\\nuint256 currencyAmount = Math.ceilDiv(\\n \\_arguments.tokenAmount \\* \\_arguments.tokenPrice,\\n 10 \\*\\* \\_arguments.token.decimals()\\n);\\n```\\n\\n`tokenAmount` is not validated at all. It should be verified to be greater than zero.\\n`token`\\n`token` is not validated at all. It should be verified to be different from the zero address.\\n`currency`\\n`currency` is not validated at all. The documentation mentions a restricted list of supported currencies. This should be enforced by checking the parameter against a whitelist of `currency` addresses. | Enhance the validation of the following parameters: `tokenAmount`, `token`, `currency`. | null | ```\\nuint256 currencyAmount = Math.ceilDiv(\\n \\_arguments.tokenAmount \\* \\_arguments.tokenPrice,\\n 10 \\*\\* \\_arguments.token.decimals()\\n);\\n```\\n
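For illustration, the missing checks could look like this sketch at the top of `initialize()` (the `currencyAllowed` whitelist mapping is a hypothetical addition maintained elsewhere):\\n```\\nrequire(_arguments.tokenAmount != 0, "tokenAmount needs to be a non-zero amount");\\nrequire(address(_arguments.token) != address(0), "token can not be zero address");\\nrequire(currencyAllowed[address(_arguments.currency)], "currency is not on the allowlist");\\n```\\n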
Lack of Validation of Crowdinvesting Initialization Parameters | low | The `Crowdinvesting` contract allows everyone who meets the requirements to buy tokens at a fixed price. The `initialize()` function receives parameters to set up the `Crowdinvesting` accordingly.\\nThe following parameters lack validation during initialization:\\n`tokenPrice`\\n`minAmountPerBuyer`\\n`lastBuyDate`\\n`currency`\\n`tokenPrice`\\n```\\nrequire(\\_arguments.tokenPrice != 0, "\\_tokenPrice needs to be a non-zero amount");\\n```\\n\\n`tokenPrice` is checked to be different from zero. It should be verified to be between `priceMin` and `priceMax` when these parameters are provided.\\n`minAmountPerBuyer`\\n```\\nrequire(\\n \\_arguments.minAmountPerBuyer <= \\_arguments.maxAmountPerBuyer,\\n "\\_minAmountPerBuyer needs to be smaller or equal to \\_maxAmountPerBuyer"\\n);\\n```\\n\\n`minAmountPerBuyer` is checked to be below or equal to `maxAmountPerBuyer`. It should also be verified to be non-zero.\\n`lastBuyDate`\\n```\\nlastBuyDate = \\_arguments.lastBuyDate;\\n```\\n\\n`lastBuyDate` is not validated at all. It should be verified to be greater than the current `block.timestamp`. Currently, a `Crowdinvesting` contract with the `lastBuyDate` parameter set to a value (different from zero) below `block.timestamp` will not be able to sell any token.\\n```\\nfunction \\_checkAndDeliver(uint256 \\_amount, address \\_tokenReceiver) internal {\\n require(tokensSold + \\_amount <= maxAmountOfTokenToBeSold, "Not enough tokens to sell left");\\n require(tokensBought[\\_tokenReceiver] + \\_amount >= minAmountPerBuyer, "Buyer needs to buy at least minAmount");\\n require(\\n tokensBought[\\_tokenReceiver] + \\_amount <= maxAmountPerBuyer,\\n "Total amount of bought tokens needs to be lower than or equal to maxAmount"\\n );\\n\\n if (lastBuyDate != 0 && block.timestamp > lastBuyDate) {\\n revert("Last buy date has passed: not selling tokens anymore.");\\n }\\n\\n tokensSold += \\_amount;\\n tokensBought[\\_tokenReceiver] += \\_amount;\\n\\n token.mint(\\_tokenReceiver, \\_amount);\\n}\\n```\\n\\n`currency`\\n```\\nrequire(address(\\_arguments.currency) != address(0), "currency can not be zero address");\\n```\\n\\n`currency` is checked to be different from the zero address. The documentation mentions a restricted list of supported currencies. This should be enforced by checking the parameter against a whitelist of `currency` addresses. | Enhance the validation of the following parameters: `tokenPrice`, `minAmountPerBuyer`, `lastBuyDate`, `currency`. | null | ```\\nrequire(\\_arguments.tokenPrice != 0, "\\_tokenPrice needs to be a non-zero amount");\\n```\\n
Missing Events on Important State Changes | medium | Throughout the code base, various important settings-related state changes are not surfaced by events.\\nIn RocketDAONodeTrusted:\\n```\\nfunction bootstrapMember(string memory _id, string memory _url, address _nodeAddress) override external onlyGuardian onlyBootstrapMode onlyRegisteredNode(_nodeAddress) onlyLatestContract("rocketDAONodeTrusted", address(this)) {\\n // Ok good to go, lets add them\\n RocketDAONodeTrustedProposalsInterface(getContractAddress("rocketDAONodeTrustedProposals")).proposalInvite(_id, _url, _nodeAddress);\\n}\\n\\n\\n// Bootstrap mode - Uint Setting\\nfunction bootstrapSettingUint(string memory _settingContractName, string memory _settingPath, uint256 _value) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAONodeTrusted", address(this)) {\\n // Ok good to go, lets update the settings\\n RocketDAONodeTrustedProposalsInterface(getContractAddress("rocketDAONodeTrustedProposals")).proposalSettingUint(_settingContractName, _settingPath, _value);\\n}\\n\\n// Bootstrap mode - Bool Setting\\nfunction bootstrapSettingBool(string memory _settingContractName, string memory _settingPath, bool _value) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAONodeTrusted", address(this)) {\\n // Ok good to go, lets update the settings\\n RocketDAONodeTrustedProposalsInterface(getContractAddress("rocketDAONodeTrustedProposals")).proposalSettingBool(_settingContractName, _settingPath, _value);\\n}\\n```\\n\\nIn RocketDAOProtocol:\\n```\\nfunction bootstrapSettingMulti(string[] memory _settingContractNames, string[] memory _settingPaths, SettingType[] memory _types, bytes[] memory _values) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAOProtocol", address(this)) {\\n // Ok good to go, lets update the settings\\n RocketDAOProtocolProposalsInterface(getContractAddress("rocketDAOProtocolProposals")).proposalSettingMulti(_settingContractNames, _settingPaths, _types, _values);\\n}\\n\\n/// @notice Bootstrap mode - Uint Setting\\nfunction bootstrapSettingUint(string memory _settingContractName, string memory _settingPath, uint256 _value) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAOProtocol", address(this)) {\\n // Ok good to go, lets update the settings\\n RocketDAOProtocolProposalsInterface(getContractAddress("rocketDAOProtocolProposals")).proposalSettingUint(_settingContractName, _settingPath, _value);\\n}\\n```\\n\\nTreasury address setter:\\n```\\nfunction bootstrapTreasuryNewContract(string memory _contractName, address _recipientAddress, uint256 _amountPerPeriod, uint256 _periodLength, uint256 _startTime, uint256 _numPeriods) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAOProtocol", address(this)) {\\n RocketDAOProtocolProposalsInterface(getContractAddress("rocketDAOProtocolProposals")).proposalTreasuryNewContract(_contractName, _recipientAddress, _amountPerPeriod, _periodLength, _startTime, _numPeriods);\\n}\\n```\\n\\nBootstrap mode management:\\n```\\nfunction bootstrapDisable(bool _confirmDisableBootstrapMode) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAOProtocol", address(this)) {\\n require(_confirmDisableBootstrapMode == true, "You must confirm disabling bootstrap mode, it can only be done once!");\\n setBool(keccak256(abi.encodePacked(daoNameSpace, "bootstrapmode.disabled")), true);\\n}\\n```\\n\\nOne-time treasury spends:\\n```\\nfunction 
bootstrapSpendTreasury(string memory _invoiceID, address _recipientAddress, uint256 _amount) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAOProtocol", address(this)) {\\n RocketDAOProtocolProposalsInterface(getContractAddress("rocketDAOProtocolProposals")).proposalTreasuryOneTimeSpend(_invoiceID, _recipientAddress, _amount);\\n}\\n```\\n\\n```\\nfunction setDelegate(address _newDelegate) external override onlyRegisteredNode(msg.sender) {\\n```\\n\\n```\\nfunction proposalSettingUint(string memory _settingNameSpace, string memory _settingPath, uint256 _value) override public onlyExecutingContracts() onlyValidSetting(_settingNameSpace, _settingPath) {\\n bytes32 namespace = keccak256(abi.encodePacked(protocolDaoSettingNamespace, _settingNameSpace));\\n```\\n\\n```\\nfunction proposalSettingBool(string memory _settingNameSpace, string memory _settingPath, bool _value) override public onlyExecutingContracts() onlyValidSetting(_settingNameSpace, _settingPath) {\\n bytes32 namespace = keccak256(abi.encodePacked(protocolDaoSettingNamespace, _settingNameSpace));\\n```\\n\\n```\\nfunction proposalSettingAddress(string memory _settingNameSpace, string memory _settingPath, address _value) override public onlyExecutingContracts() onlyValidSetting(_settingNameSpace, _settingPath) {\\n bytes32 namespace = keccak256(abi.encodePacked(protocolDaoSettingNamespace, _settingNameSpace));\\n```\\n\\n```\\nfunction proposalInvite(string calldata _id, address _memberAddress) override public onlyLatestContract("rocketDAOProtocolProposals", msg.sender) {\\n // Their proposal executed, record the block\\n```\\n | Resolution\\nThe client implemented a fix in commit `1be41a88a40125baf58d8904770cd9eb9e0732bb` and provided the following statement:\\nRocketDAONodeTrusted is not a contract that is getting upgrade so this won't be fixed\\nRocketDAOProtocol has been updated to include events for each bootstrap function\\nRocketNetworkVoting has been updated to emit an event\\nRocketDAOSecurityProposals has been updated to emit events for all proposals\\nWe recommend emitting events on state changes, particularly when these are performed by an authorized party. 
The implementation of the recommendation should be analogous to the handling of events on state changes in the rest of the system, such as in the `RocketMinipoolPenalty` contract:\\n```\\nfunction setMaxPenaltyRate(uint256 _rate) external override onlyGuardian {\\n // Update rate\\n maxPenaltyRate = _rate;\\n // Emit event\\n emit MaxPenaltyRateUpdated(_rate, block.timestamp);\\n}\\n```\\n | null | ```\\nfunction bootstrapMember(string memory _id, string memory _url, address _nodeAddress) override external onlyGuardian onlyBootstrapMode onlyRegisteredNode(_nodeAddress) onlyLatestContract("rocketDAONodeTrusted", address(this)) {\\n // Ok good to go, lets add them\\n RocketDAONodeTrustedProposalsInterface(getContractAddress("rocketDAONodeTrustedProposals")).proposalInvite(_id, _url, _nodeAddress);\\n}\\n\\n\\n// Bootstrap mode - Uint Setting\\nfunction bootstrapSettingUint(string memory _settingContractName, string memory _settingPath, uint256 _value) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAONodeTrusted", address(this)) {\\n // Ok good to go, lets update the settings\\n RocketDAONodeTrustedProposalsInterface(getContractAddress("rocketDAONodeTrustedProposals")).proposalSettingUint(_settingContractName, _settingPath, _value);\\n}\\n\\n// Bootstrap mode - Bool Setting\\nfunction bootstrapSettingBool(string memory _settingContractName, string memory _settingPath, bool _value) override external onlyGuardian onlyBootstrapMode onlyLatestContract("rocketDAONodeTrusted", address(this)) {\\n // Ok good to go, lets update the settings\\n RocketDAONodeTrustedProposalsInterface(getContractAddress("rocketDAONodeTrustedProposals")).proposalSettingBool(_settingContractName, _settingPath, _value);\\n}\\n```\\n |
RocketDAOProtocolProposal._propose() Should Revert if _blockNumber > block.number | medium | Currently, the `RocketDAOProtocolProposal._propose()` function does not account for scenarios where `_blockNumber` is greater than `block.number`. This is a critical oversight, as voting power cannot be determined for future block numbers.\\n```\\nfunction _propose(string memory _proposalMessage, uint256 _blockNumber, uint256 _totalVotingPower, bytes calldata _payload) internal returns (uint256) {\\n```\\n | We recommend updating the function to revert on transactions where `_blockNumber` exceeds `block.number`. This will prevent the creation of proposals with undefined voting power and maintain the integrity of the voting process. | null | ```\\nfunction _propose(string memory _proposalMessage, uint256 _blockNumber, uint256 _totalVotingPower, bytes calldata _payload) internal returns (uint256) {\\n```\\n |
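A minimal sketch of the recommended guard at the top of `_propose()` (the revert message is illustrative):\\n```\\nrequire(_blockNumber <= block.number, "Block number must not be in the future");\\n```\\n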
Unused Parameter and Improper Parameter Sanitization in RocketNetworkVoting.calculateVotingPower() | low | The `matchedETH` parameter in `RocketNetworkVoting.calculateVotingPower()` is unused.\\n```\\n// Get contracts\\nRocketDAOProtocolSettingsNodeInterface rocketDAOProtocolSettingsNode = RocketDAOProtocolSettingsNodeInterface(getContractAddress("rocketDAOProtocolSettingsNode"));\\n```\\n\\nAdditionally, the `_block` parameter is not sanitized. Thus, if calling the function with a block number `_block` where `_block >= block.number`, the call will revert because of a division-by-zero error. Indeed, `rocketNetworkSnapshots.lookupRecent` will return a `rplPrice` of zero since the checkpoint does not exist. Consequently, the function `calculateVotingPower` will revert when computing the `maximumStake`.\\n```\\nkey = keccak256(abi.encodePacked("rpl.staked.node.amount", _nodeAddress));\\nuint256 rplStake = uint256(rocketNetworkSnapshots.lookupRecent(key, uint32(_block), 5));\\n\\nreturn calculateVotingPower(rplStake, ethMatched, ethProvided, rplPrice);\\n```\\n\\n```\\nuint256 maximumStake = providedETH * maximumStakePercent / rplPrice;\\n```\\n | We recommend removing the unused parameter to enhance code clarity. The presence of unused parameters can lead to potential confusion for future developers. Additionally, we recommend ensuring that the snapshotted `rplPrice` value exists before it is used to compute the `maximumStake` value. | null | ```\\n// Get contracts\\nRocketDAOProtocolSettingsNodeInterface rocketDAOProtocolSettingsNode = RocketDAOProtocolSettingsNodeInterface(getContractAddress("rocketDAOProtocolSettingsNode"));\\n```\\n |
Wrong/Misleading NatSpec Documentation | low | The NatSpec documentation in several parts of the code base contains inaccuracies or is misleading. This issue can lead to misunderstandings about how the code functions, especially for developers who rely on these comments for clarity and guidance.\\nIn `RocketDAOProtocolProposal`, the NatSpec comments are potentially misleading:\\n```\\n/// @notice Get the votes against count of this proposal\\n/// @param _proposalID The ID of the proposal to query\\n```\\n\\n```\\n/// @notice Returns true if this proposal was supported by this node\\n/// @param _proposalID The ID of the proposal to query\\n/// @param _nodeAddress The node operator address to query\\nfunction getReceiptDirection(uint256 _proposalID, address _nodeAddress) override public view returns (VoteDirection) {\\n return VoteDirection(getUint(keccak256(abi.encodePacked(daoProposalNameSpace, "receipt.direction", _proposalID, _nodeAddress))));\\n}\\n```\\n\\nIn RocketDAOProtocolVerifier, the NatSpec documentation is incomplete, which might leave out critical information about the function's purpose and behavior:\\n```\\n/// @notice Used by a verifier to challenge a specific index of a proposal's voting power tree\\n/// @param _proposalID The ID of the proposal being challenged\\n/// @param _index The global index of the node being challenged\\n```\\n | The NatSpec documentation should be thoroughly reviewed and corrected where necessary. We recommend ensuring it accurately reflects the code's functionality and provides complete information. | null | ```\\n/// @notice Get the votes against count of this proposal\\n/// @param _proposalID The ID of the proposal to query\\n```\\n |
RocketDAOProtocolSettingsRewards.setSettingRewardClaimPeriods() Cannot Be Invoked | low | ```\\nsetUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "periods")), _periods);\\n```\\n | To make this function useful and align it with its intended purpose, we recommend integrating its functionality into `RocketDAOProtocolProposals`. In addition, we recommend that this function emit an event upon successful change of settings, enhancing the transparency of the operation. | null | ```\\nsetUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "periods")), _periods);\\n```\\n |
No Protection of Uninitialized Implementation Contracts From Attacker | medium | In contracts implementing OpenZeppelin's UUPS model, an uninitialized implementation contract can be taken over by an attacker via its `initialize` function. It is recommended to invoke the `_disableInitializers` function in the constructor to prevent the implementation contract from being used by an attacker. However, none of the contracts that implement `OwnablePausableUpgradeable` call `_disableInitializers` in their constructors.\\n```\\ncontract Rewards is IRewards, OwnablePausableUpgradeable, ReentrancyGuardUpgradeable {\\n```\\n\\n```\\ncontract Pool is IPool, OwnablePausableUpgradeable, ReentrancyGuardUpgradeable {\\n```\\n\\n```\\ncontract StakedLyxToken is OwnablePausableUpgradeable, LSP4DigitalAssetMetadataInitAbstract, IStakedLyxToken, ReentrancyGuardUpgradeable {\\n```\\n\\netc. | Invoke `_disableInitializers` in the constructors of the contracts that implement `OwnablePausableUpgradeable`, including the following:\\n```\\nPool\\nPoolValidators\\nFeeEscrow\\nRewards\\nStakedLyxToken\\nOracles\\nMerkleDistributor\\n```\\n | null | ```\\ncontract Rewards is IRewards, OwnablePausableUpgradeable, ReentrancyGuardUpgradeable {\\n```\\n
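The standard OpenZeppelin pattern is a constructor like the following in each implementation contract:\\n```\\n/// @custom:oz-upgrades-unsafe-allow constructor\\nconstructor() {\\n _disableInitializers();\\n}\\n```\\n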
Unsafe Function receiveFees Acknowledged | low | In the Pool contract, the function `receiveFees` is used to compensate for a potential penalty/slashing in the protocol by sending LYX back to the pool without minting sLYX. The side effect is that anyone can send LYX to the pool, which could mess up the pool balance after all validators have exited. In fact, it can be replaced by another function, `receiveWithoutActivation`, which has access control and does the same thing.\\n```\\nfunction receiveFees() external payable override {}\\n```\\n\\n```\\nfunction receiveWithoutActivation() external payable override {\\n require(msg.sender == address(stakedLyxToken) || hasRole(DEFAULT\\_ADMIN\\_ROLE, msg.sender), "Pool: access denied");\\n}\\n```\\n | Remove function `receiveFees` | null | ```\\nfunction receiveFees() external payable override {}\\n```\\n
Unnecessary Matching in Unstake Process | low | In the function `unstakeProcessed` of the `StakedLyxToken` contract, when `unstakeAmount > totalPendingUnstake`, all pending unstake requests can be processed, so there is no need to go through the matching loop; as a result, the extra gas spent on matching can be saved.\\n```\\nif (unstakeAmount > totalPendingUnstake) {\\n pool.receiveWithoutActivation{value: unstakeAmount - totalPendingUnstake}();\\n unstakeAmount = totalPendingUnstake;\\n}\\n\\ntotalPendingUnstake -= unstakeAmount;\\ntotalUnstaked += unstakeAmount;\\nuint256 amountToFill = unstakeAmount;\\n\\nfor (uint256 i = unstakeRequestCurrentIndex; i <= unstakeRequestCount; i++) {\\n UnstakeRequest storage request = \\_unstakeRequests[i];\\n if (amountToFill > (request.amount - request.amountFilled)) {\\n amountToFill -= (request.amount - request.amountFilled);\\n continue;\\n } else {\\n if (amountToFill == (request.amount - request.amountFilled) && i < unstakeRequestCount) {\\n unstakeRequestCurrentIndex = i + 1;\\n } else {\\n request.amountFilled += uint128(amountToFill);\\n unstakeRequestCurrentIndex = i;\\n }\\n break;\\n }\\n}\\n```\\n | Put the matching part (lines 393-411) into the else branch of `if (unstakeAmount > totalPendingUnstake)`, and change the if branch into the following:\\n```\\nif (unstakeAmount > totalPendingUnstake) {\\n pool.receiveWithoutActivation{value: unstakeAmount - totalPendingUnstake}();\\n unstakeAmount = totalPendingUnstake;\\n totalPendingUnstake = 0;\\n unstakeRequestCurrentIndex = unstakeRequestCount;\\n _unstakeRequests[unstakeRequestCount].amountFilled = _unstakeRequests[unstakeRequestCount].amount;\\n } \\n```\\n | null | ```\\nif (unstakeAmount > totalPendingUnstake) {\\n pool.receiveWithoutActivation{value: unstakeAmount - totalPendingUnstake}();\\n unstakeAmount = totalPendingUnstake;\\n}\\n\\ntotalPendingUnstake -= unstakeAmount;\\ntotalUnstaked += unstakeAmount;\\nuint256 amountToFill = unstakeAmount;\\n\\nfor (uint256 i = unstakeRequestCurrentIndex; i <= unstakeRequestCount; i++) {\\n UnstakeRequest storage request = \\_unstakeRequests[i];\\n if (amountToFill > (request.amount - request.amountFilled)) {\\n amountToFill -= (request.amount - request.amountFilled);\\n continue;\\n } else {\\n if (amountToFill == (request.amount - request.amountFilled) && i < unstakeRequestCount) {\\n unstakeRequestCurrentIndex = i + 1;\\n } else {\\n request.amountFilled += uint128(amountToFill);\\n unstakeRequestCurrentIndex = i;\\n }\\n break;\\n }\\n}\\n```\\n
Re-Entrancy Risks Associated With External Calls With Other Liquid Staking Systems. | high | As part of the strategy to integrate with Liquid Staking tokens for Ethereum staking, the Lybra Protocol vaults are required to make external calls to Liquid Staking systems.\\nFor example, the `depositEtherToMint` function in the vaults makes external calls to deposit Ether and receive the LSD tokens back. While external calls to untrusted third-party contracts may be dangerous, in this case, the Lybra Protocol already extends trust assumptions to these third parties simply through the act of accepting their tokens as collateral. Indeed, in some cases the contract addresses are even hardcoded into the contract and called directly instead of relying on some registry:\\n```\\ncontract LybraWstETHVault is LybraPeUSDVaultBase {\\n Ilido immutable lido;\\n //WstETH = 0x7f39C581F595B53c5cb19bD0b3f8dA6c935E2Ca0;\\n //Lido = 0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84;\\n constructor(address \\_lido, address \\_asset, address \\_oracle, address \\_config) LybraPeUSDVaultBase(\\_asset, \\_oracle, \\_config) {\\n lido = Ilido(\\_lido);\\n }\\n\\n function depositEtherToMint(uint256 mintAmount) external payable override {\\n require(msg.value >= 1 ether, "DNL");\\n uint256 sharesAmount = lido.submit{value: msg.value}(address(configurator));\\n require(sharesAmount != 0, "ZERO\\_DEPOSIT");\\n lido.approve(address(collateralAsset), msg.value);\\n uint256 wstETHAmount = IWstETH(address(collateralAsset)).wrap(msg.value);\\n depositedAsset[msg.sender] += wstETHAmount;\\n if (mintAmount > 0) {\\n \\_mintPeUSD(msg.sender, msg.sender, mintAmount, getAssetPrice());\\n }\\n emit DepositEther(msg.sender, address(collateralAsset), msg.value,wstETHAmount, block.timestamp);\\n }\\n```\\n\\nIn that case, depending on the contract, it may be known what contract is being called, and the risk may be assessed as far as what logic may be executed.\\nHowever, in the cases of `BETH` and `rETH`, the calls are being made into a proxy and a contract registry of a DAO (RocketPool's DAO) respectively.\\n```\\ncontract LybraWBETHVault is LybraPeUSDVaultBase {\\n //WBETH = 0xa2e3356610840701bdf5611a53974510ae27e2e1\\n constructor(address \\_asset, address \\_oracle, address \\_config)\\n LybraPeUSDVaultBase(\\_asset, \\_oracle, \\_config) {}\\n\\n function depositEtherToMint(uint256 mintAmount) external payable override {\\n require(msg.value >= 1 ether, "DNL");\\n uint256 preBalance = collateralAsset.balanceOf(address(this));\\n IWBETH(address(collateralAsset)).deposit{value: msg.value}(address(configurator));\\n uint256 balance = collateralAsset.balanceOf(address(this));\\n depositedAsset[msg.sender] += balance - preBalance;\\n\\n if (mintAmount > 0) {\\n \\_mintPeUSD(msg.sender, msg.sender, mintAmount, getAssetPrice());\\n }\\n\\n emit DepositEther(msg.sender, address(collateralAsset), msg.value,balance - preBalance, block.timestamp);\\n }\\n```\\n\\n```\\nconstructor(address \\_rocketStorageAddress, address \\_rETH, address \\_oracle, address \\_config)\\n LybraPeUSDVaultBase(\\_rETH, \\_oracle, \\_config) {\\n rocketStorage = IRocketStorageInterface(\\_rocketStorageAddress);\\n}\\n\\nfunction depositEtherToMint(uint256 mintAmount) external payable override {\\n require(msg.value >= 1 ether, "DNL");\\n uint256 preBalance = collateralAsset.balanceOf(address(this));\\n IRocketDepositPool(rocketStorage.getAddress(keccak256(abi.encodePacked("contract.address", "rocketDepositPool")))).deposit{value: msg.value}();\\n uint256 balance = 
collateralAsset.balanceOf(address(this));\\n depositedAsset[msg.sender] += balance - preBalance;\\n\\n if (mintAmount > 0) {\\n \\_mintPeUSD(msg.sender, msg.sender, mintAmount, getAssetPrice());\\n }\\n\\n emit DepositEther(msg.sender, address(collateralAsset), msg.value,balance - preBalance, block.timestamp);\\n}\\n```\\n\\nAs a result, it is impossible to make any guarantees about what logic will be executed during the external calls. Namely, reentrancy risks can't be ruled out, and the damage could be critical to the system. While the trust in these parties isn't in question, it would be best practice to avoid any additional reentrancy risks by placing reentrancy guards. Indeed, in the `LybraRETHVault` and `LybraWbETHVault` contracts, one can see the possible damage, as the calls are surrounded by a `preBalance <-> balance` pattern.\\nThe third-party Liquid Staking systems would not need to be compromised in their entirety; a compromise of only these particular code paths would be enough to cause critical damage to the Lybra Protocol. | After conversations with the Lybra Finance team, it has been assessed that reentrancy guards are appropriate in this scenario to avoid any potential reentrancy risk, which is exactly the recommendation this audit team would provide. | null | ```\\ncontract LybraWstETHVault is LybraPeUSDVaultBase {\\n Ilido immutable lido;\\n //WstETH = 0x7f39C581F595B53c5cb19bD0b3f8dA6c935E2Ca0;\\n //Lido = 0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84;\\n constructor(address \\_lido, address \\_asset, address \\_oracle, address \\_config) LybraPeUSDVaultBase(\\_asset, \\_oracle, \\_config) {\\n lido = Ilido(\\_lido);\\n }\\n\\n function depositEtherToMint(uint256 mintAmount) external payable override {\\n require(msg.value >= 1 ether, "DNL");\\n uint256 sharesAmount = lido.submit{value: msg.value}(address(configurator));\\n require(sharesAmount != 0, "ZERO\\_DEPOSIT");\\n lido.approve(address(collateralAsset), msg.value);\\n uint256 wstETHAmount = IWstETH(address(collateralAsset)).wrap(msg.value);\\n depositedAsset[msg.sender] += wstETHAmount;\\n if (mintAmount > 0) {\\n \\_mintPeUSD(msg.sender, msg.sender, mintAmount, getAssetPrice());\\n }\\n emit DepositEther(msg.sender, address(collateralAsset), msg.value,wstETHAmount, block.timestamp);\\n }\\n```\\n
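A minimal sketch of the guard, assuming the vaults inherit OpenZeppelin's `ReentrancyGuard` (how the guard is wired into the vault base contracts is an assumption, not the team's final implementation):\\n```\\nfunction depositEtherToMint(uint256 mintAmount) external payable override nonReentrant {\\n // ... existing deposit logic unchanged; reentry during the external LSD call now reverts ...\\n}\\n```\\n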
The Deployer of GovernanceTimelock Gets Privileged Access to the System. | high | The `GovernanceTimelock` contract is responsible for Roles Based Access Control management and checks in the Lybra Protocol. It offers two functions specifically that check if an address has the required role - `checkRole` and checkOnlyRole:\\n```\\nfunction checkRole(bytes32 role, address \\_sender) public view returns(bool){\\n return hasRole(role, \\_sender) || hasRole(DAO, \\_sender);\\n}\\n\\nfunction checkOnlyRole(bytes32 role, address \\_sender) public view returns(bool){\\n return hasRole(role, \\_sender);\\n}\\n```\\n\\nIn `checkRole`, the contract also lets an address with the role `DAO` bypass the check altogether, making it a powerful role.\\nFor initial role management, when the `GovernanceTimelock` contract gets deployed, its constructor logic initializes a few roles, assigns relevant admin roles, and, notably, assigns the `DAO` role to the contract, and the `DAO` and the `GOV` role to the deployer.\\n```\\nconstructor(uint256 minDelay, address[] memory proposers, address[] memory executors, address admin) TimelockController(minDelay, proposers, executors, admin) {\\n \\n \\_setRoleAdmin(DAO, GOV);\\n \\_setRoleAdmin(TIMELOCK, GOV);\\n \\_setRoleAdmin(ADMIN, GOV);\\n \\_grantRole(DAO, address(this));\\n \\_grantRole(DAO, msg.sender);\\n \\_grantRole(GOV, msg.sender);\\n}\\n```\\n\\nThe assignment of such powerful roles to a single private key with the deployer has inherent risks. Specifically in our case, the `DAO` role alone as we saw may bypass many checks within the Lybra Protocol, and the `GOV` role even has role management privileges.\\nHowever, it does make sense to assign such roles at the beginning of the deployment to finish initialization and assign the rest of the roles. One could argue that having access to the `DAO` role in the early stages of the system's life could allow for quick disaster recovery in the event of incidents as well. Though, it is still dangerous to hold privileges for such a system in a single address as we have seen over the last years in security incidents that have to do with compromised keys. | While redesigning the deployment process to account for a lesser-privileged deployer would be ideal, the Lybra Finance team should at least transfer ownership as soon as the deployment is complete to minimize compromised private key risk. | null | ```\\nfunction checkRole(bytes32 role, address \\_sender) public view returns(bool){\\n return hasRole(role, \\_sender) || hasRole(DAO, \\_sender);\\n}\\n\\nfunction checkOnlyRole(bytes32 role, address \\_sender) public view returns(bool){\\n return hasRole(role, \\_sender);\\n}\\n```\\n |
The configurator.getEUSDMaxLocked() Condition Can Be Bypassed During a Flashloan | medium | When converting `EUSD` tokens to `peUSD`, there is a check that limits the total amount of `EUSD` that can be converted:\\n```\\nfunction convertToPeUSD(address user, uint256 eusdAmount) public {\\n require(\\_msgSender() == user || \\_msgSender() == address(this), "MDM");\\n require(eusdAmount != 0, "ZA");\\n require(EUSD.balanceOf(address(this)) + eusdAmount <= configurator.getEUSDMaxLocked(),"ESL");\\n```\\n\\nThe issue is that there is a way to bypass this restriction. An attacker can get a flash loan (in EUSD) from this contract, essentially reducing the visible amount of locked tokens (EUSD.balanceOf(address(this))). | Multiple approaches can solve this issue. One would be adding reentrancy protection. Another one could be keeping track of the borrowed amount for a flashloan. | null | ```\\nfunction convertToPeUSD(address user, uint256 eusdAmount) public {\\n require(\\_msgSender() == user || \\_msgSender() == address(this), "MDM");\\n require(eusdAmount != 0, "ZA");\\n require(EUSD.balanceOf(address(this)) + eusdAmount <= configurator.getEUSDMaxLocked(),"ESL");\\n```\\n |
Liquidation Keepers Automatically Become eUSD Debt Providers for Other Liquidations. | medium | One of the most important mechanisms in the Lybra Protocol is the liquidation of poorly collateralized vaults. For example, if a vault is found to have a collateralization ratio that is too small, a liquidator may provide debt tokens to the protocol and retrieve the vault collateral at a discount:\\n```\\nfunction liquidation(address provider, address onBehalfOf, uint256 assetAmount) external virtual {\\n uint256 assetPrice = getAssetPrice();\\n uint256 onBehalfOfCollateralRatio = (depositedAsset[onBehalfOf] \\* assetPrice \\* 100) / borrowed[onBehalfOf];\\n require(onBehalfOfCollateralRatio < badCollateralRatio, "Borrowers collateral ratio should below badCollateralRatio");\\n\\n require(assetAmount \\* 2 <= depositedAsset[onBehalfOf], "a max of 50% collateral can be liquidated");\\n require(EUSD.allowance(provider, address(this)) != 0, "provider should authorize to provide liquidation EUSD");\\n uint256 eusdAmount = (assetAmount \\* assetPrice) / 1e18;\\n\\n \\_repay(provider, onBehalfOf, eusdAmount);\\n uint256 reducedAsset = assetAmount \\* 11 / 10;\\n totalDepositedAsset -= reducedAsset;\\n depositedAsset[onBehalfOf] -= reducedAsset;\\n uint256 reward2keeper;\\n if (provider == msg.sender) {\\n collateralAsset.safeTransfer(msg.sender, reducedAsset);\\n } else {\\n reward2keeper = (reducedAsset \\* configurator.vaultKeeperRatio(address(this))) / 110;\\n collateralAsset.safeTransfer(provider, reducedAsset - reward2keeper);\\n collateralAsset.safeTransfer(msg.sender, reward2keeper);\\n }\\n emit LiquidationRecord(provider, msg.sender, onBehalfOf, eusdAmount, reducedAsset, reward2keeper, false, block.timestamp);\\n}\\n```\\n\\nTo liquidate the vault, the liquidator needs to transfer debt tokens from the provider address, which in turn needs to have had approved allowance of the token for the vault:\\n```\\nrequire(EUSD.allowance(provider, address(this)) != 0, "provider should authorize to provide liquidation EUSD");\\n```\\n\\nThe allowance doesn't need to be large, it only needs to be non-zero. While it is true that in the `superLiquidation` function the allowance check is for `eusdAmount`, which is the amount associated with `assetAmount` (the requested amount of collateral to be liquidated), the liquidator could simply call the maximum of the allowance the provider has given to the vault and then repeat the liquidation process. The allowance does not actually decrease throughout the liquidation process.\\n```\\nrequire(EUSD.allowance(provider, address(this)) >= eusdAmount, "provider should authorize to provide liquidation EUSD");\\n```\\n\\nNotably, this address doesn't have to be the same one as the liquidator. In fact, there are no checks on whether the liquidator has an agreement or allowance from the provider to use their tokens in this particular vault's liquidation. 
The contract only checks to see if the provider has `EUSD` allowance for the vault, and how to split the rewards if the provider is different from the liquidator:\\n```\\nif (provider == msg.sender) {\\n collateralAsset.safeTransfer(msg.sender, reducedAsset);\\n} else {\\n reward2keeper = (reducedAsset \\* configurator.vaultKeeperRatio(address(this))) / 110;\\n collateralAsset.safeTransfer(provider, reducedAsset - reward2keeper);\\n collateralAsset.safeTransfer(msg.sender, reward2keeper);\\n}\\n```\\n\\nIn fact, this is a design choice of the system to treat the allowance to the vault as an agreement to become a public provider of debt tokens for the liquidation process. It is important to note that there are incentives associated with being a provider as they get the collateral asset at a discount.\\nHowever, it is not obvious from documentation at the time of the audit nor the code that an address having a non-zero `EUSD` allowance for the vault automatically allows other users to use that address as a provider. Indeed, many general-purpose liquidator bots use their tokens during liquidations, using the same address for both the liquidator and the provider. As a result, this would put that address at the behest of any other user who would want to utilize these tokens in liquidations. The user might not be comfortable doing this trade in any case, even at a discount.\\nIn fact, due to this mechanism, even during consciously initiated liquidations MEV bots could spot this opportunity and front-run the liquidator's transaction. A frontrunner could put themselves as the keeper and the original user as the provider, grabbing the `reward2keeper` fee and leaving the original address with fewer rewards and failed gas after the liquidation. | While the mechanism is understood to be done for convenience and access to liquidity as a design decision, this could put unaware users in unfortunate situations of having performed a trade without explicit consent. Specifically, the MEV attack vector could be executed and repeated without fail by a capable actor monitoring the mempool. Consider having a separate, explicit flag for allowing others to use a user's tokens during liquidation, thus also accommodating solo liquidators by removing the MEV attack vector. Consider explicitly mentioning these mechanisms in the documentation as well. 
| null | ```\\nfunction liquidation(address provider, address onBehalfOf, uint256 assetAmount) external virtual {\\n uint256 assetPrice = getAssetPrice();\\n uint256 onBehalfOfCollateralRatio = (depositedAsset[onBehalfOf] \\* assetPrice \\* 100) / borrowed[onBehalfOf];\\n require(onBehalfOfCollateralRatio < badCollateralRatio, "Borrowers collateral ratio should below badCollateralRatio");\\n\\n require(assetAmount \\* 2 <= depositedAsset[onBehalfOf], "a max of 50% collateral can be liquidated");\\n require(EUSD.allowance(provider, address(this)) != 0, "provider should authorize to provide liquidation EUSD");\\n uint256 eusdAmount = (assetAmount \\* assetPrice) / 1e18;\\n\\n \\_repay(provider, onBehalfOf, eusdAmount);\\n uint256 reducedAsset = assetAmount \\* 11 / 10;\\n totalDepositedAsset -= reducedAsset;\\n depositedAsset[onBehalfOf] -= reducedAsset;\\n uint256 reward2keeper;\\n if (provider == msg.sender) {\\n collateralAsset.safeTransfer(msg.sender, reducedAsset);\\n } else {\\n reward2keeper = (reducedAsset \\* configurator.vaultKeeperRatio(address(this))) / 110;\\n collateralAsset.safeTransfer(provider, reducedAsset - reward2keeper);\\n collateralAsset.safeTransfer(msg.sender, reward2keeper);\\n }\\n emit LiquidationRecord(provider, msg.sender, onBehalfOf, eusdAmount, reducedAsset, reward2keeper, false, block.timestamp);\\n}\\n```\\n |
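A minimal sketch of the recommended explicit opt-in flag, assuming it is added to the Lybra vault alongside the existing allowance check; the mapping, event, and function names below are hypothetical and only illustrate the idea.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative excerpt only: providers must explicitly opt in before third-party
/// keepers can spend their eUSD allowance during liquidations.
abstract contract VaultProviderOptInSketch {
    // provider => has agreed to let arbitrary keepers use their allowance
    mapping(address => bool) public isPublicLiquidationProvider;

    event LiquidationProviderToggled(address indexed provider, bool enabled);

    /// Providers opt in (or out) of third-party liquidations using their funds.
    function setPublicLiquidationProvider(bool enabled) external {
        isPublicLiquidationProvider[msg.sender] = enabled;
        emit LiquidationProviderToggled(msg.sender, enabled);
    }

    /// To be called at the top of liquidation(): solo keepers (provider == msg.sender)
    /// are always allowed; anyone else must use a provider that has opted in.
    function _checkProvider(address provider) internal view {
        require(
            provider == msg.sender || isPublicLiquidationProvider[provider],
            "provider has not opted in to third-party liquidations"
        );
    }
}
```

With the solo-keeper path always allowed and the shared-provider path gated behind an explicit flag, the front-running vector described above disappears for providers who have not opted in.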
Use the Same Solidity Version Across Contracts. | low | Most contracts use the same Solidity version, `pragma solidity ^0.8.17`. The only exception is the `StakingRewardsV2` contract, which uses `pragma solidity ^0.8`.\\n```\\npragma solidity ^0.8;\\n```\\n | If all contracts are to be tested and used together, it would be best to use and document the same compiler version in all contract code to avoid issues and inconsistencies that may arise across Solidity versions. | null | ```\\npragma solidity ^0.8;\\n```\\n
Missing Events. | low | In a few cases in the Lybra Protocol system, there are contracts that are missing events in significant scenarios, such as important configuration changes like a price oracle change. Consider implementing more events in the below examples.\\nNo events in the contract:\\n```\\ncontract esLBRBoost is Ownable {\\n esLBRLockSetting[] public esLBRLockSettings;\\n mapping(address => LockStatus) public userLockStatus;\\n IMiningIncentives public miningIncentives;\\n\\n // Define a struct for the lock settings\\n struct esLBRLockSetting {\\n uint256 duration;\\n uint256 miningBoost;\\n }\\n\\n // Define a struct for the user's lock status\\n struct LockStatus {\\n uint256 lockAmount;\\n uint256 unlockTime;\\n uint256 duration;\\n uint256 miningBoost;\\n }\\n\\n // Constructor to initialize the default lock settings\\n constructor(address \\_miningIncentives) {\\n```\\n\\nMissing an event during a premature unlock:\\n```\\nfunction unlockPrematurely() external {\\n require(block.timestamp + exitCycle - 3 days > time2fullRedemption[msg.sender], "ENW");\\n uint256 burnAmount = getReservedLBRForVesting(msg.sender) - getPreUnlockableAmount(msg.sender);\\n uint256 amount = getPreUnlockableAmount(msg.sender) + getClaimAbleLBR(msg.sender);\\n if (amount > 0) {\\n LBR.mint(msg.sender, amount);\\n }\\n unstakeRatio[msg.sender] = 0;\\n time2fullRedemption[msg.sender] = 0;\\n grabableAmount += burnAmount;\\n}\\n```\\n\\nMissing events for setting important configurations such as `setToken`, `setLBROracle`, and setPools:\\n```\\nfunction setToken(address \\_lbr, address \\_eslbr) external onlyOwner {\\n LBR = \\_lbr;\\n esLBR = \\_eslbr;\\n}\\n\\nfunction setLBROracle(address \\_lbrOracle) external onlyOwner {\\n lbrPriceFeed = AggregatorV3Interface(\\_lbrOracle);\\n}\\n\\nfunction setPools(address[] memory \\_vaults) external onlyOwner {\\n require(\\_vaults.length <= 10, "EL");\\n for (uint i = 0; i < \\_vaults.length; i++) {\\n require(configurator.mintVault(\\_vaults[i]), "NOT\\_VAULT");\\n }\\n vaults = \\_vaults;\\n}\\n```\\n\\nMissing events for setting important configurations such as `setRewardsDuration` and setBoost:\\n```\\n// Allows the owner to set the rewards duration\\nfunction setRewardsDuration(uint256 \\_duration) external onlyOwner {\\n require(finishAt < block.timestamp, "reward duration not finished");\\n duration = \\_duration;\\n}\\n\\n// Allows the owner to set the boost contract address\\nfunction setBoost(address \\_boost) external onlyOwner {\\n esLBRBoost = IesLBRBoost(\\_boost);\\n}\\n```\\n\\nMissing event during what is essentially staking `LBR` into `esLBR` (such as in ProtocolRewardsPool.stake()). Consider an appropriate event here such as StakeLBR:\\n```\\nif(useLBR) {\\n IesLBR(miningIncentives.LBR()).burn(msg.sender, lbrAmount);\\n IesLBR(miningIncentives.esLBR()).mint(msg.sender, lbrAmount);\\n}\\n```\\n | Implement additional events as appropriate. | null | ```\\ncontract esLBRBoost is Ownable {\\n esLBRLockSetting[] public esLBRLockSettings;\\n mapping(address => LockStatus) public userLockStatus;\\n IMiningIncentives public miningIncentives;\\n\\n // Define a struct for the lock settings\\n struct esLBRLockSetting {\\n uint256 duration;\\n uint256 miningBoost;\\n }\\n\\n // Define a struct for the user's lock status\\n struct LockStatus {\\n uint256 lockAmount;\\n uint256 unlockTime;\\n uint256 duration;\\n uint256 miningBoost;\\n }\\n\\n // Constructor to initialize the default lock settings\\n constructor(address \\_miningIncentives) {\\n```\\n |
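As an illustration of the recommendation, a hedged sketch of the `StakingRewardsV2` setters with events added; the event names are suggestions rather than protocol definitions, and the owner check is simplified.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Placeholder interface; the real boost contract has its own functions.
interface IesLBRBoost {}

/// Sketch only: the same setters as in the report, now emitting events so
/// off-chain monitoring can track configuration changes.
contract StakingRewardsEventsSketch {
    address public owner;
    uint256 public duration;
    uint256 public finishAt;
    IesLBRBoost public esLBRBoost;

    event RewardsDurationUpdated(uint256 newDuration);
    event BoostContractUpdated(address indexed newBoost);

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    constructor() {
        owner = msg.sender;
    }

    function setRewardsDuration(uint256 _duration) external onlyOwner {
        require(finishAt < block.timestamp, "reward duration not finished");
        duration = _duration;
        emit RewardsDurationUpdated(_duration);
    }

    function setBoost(address _boost) external onlyOwner {
        esLBRBoost = IesLBRBoost(_boost);
        emit BoostContractUpdated(_boost);
    }
}
```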
Incorrect Interfaces | low | In a few cases, incorrect interfaces are used on top of contracts. Though the effect is the same as the contracts are just tokens and follow the same interfaces, it is best practice to implement correct interfaces.\\n`IPeUSD` is used instead of `IEUSD`\\n```\\nIPeUSD public EUSD;\\n```\\n\\n`IPeUSD` is used instead of `IEUSD`\\n```\\nif (address(EUSD) == address(0)) EUSD = IPeUSD(\\_eusd);\\n```\\n\\n`IesLBR` instead of `ILBR`\\n```\\nIesLBR public LBR;\\n```\\n\\n`IesLBR` instead of `ILBR`\\n```\\nLBR = IesLBR(\\_lbr);\\n```\\n | Implement correct interfaces for consistency. | null | ```\\nIPeUSD public EUSD;\\n```\\n |
Production Builds Allow Development and Localhost Origins; Snap Does Not Enforce Transport Security | medium | The snaps RPC access is restricted to certain origins only. However, there is no logic that disables development/test domains from origin checks in production builds.\\nSolflare Snap\\n../solflare-snap/src/index.js:L7-L17\\n```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.com$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n\\nAptos Snap\\n../aptos-snap/src/index.js:L6-L15\\n```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?risewallet\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n\\nSui Snap\\n../sui-snap/src/index.js:L8-L17\\n```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?elliwallet\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n | Implement logic that removes development/localhost origin from the allow list for production builds. Employ strict checks on the format of provided origin. Do not by default allow all subdomains. | null | ```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.com$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n |
Production Builds Allow Development and Localhost Origins; Snap Does Not Enforce Transport Security Partially Addressed | medium | The snaps RPC access is restricted to certain origins only. However, there is no logic that disables development/test domains from origin checks in production builds.\\nSolflare Snap\\n../solflare-snap/src/index.js:L7-L17\\n```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.com$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n\\nAptos Snap\\n../aptos-snap/src/index.js:L6-L15\\n```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?risewallet\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n\\nSui Snap\\n../sui-snap/src/index.js:L8-L17\\n```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?elliwallet\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n | Resolution\\nThe client has issued the following statement:\\nChangesets:\\nsolflare-wallet/solflare-snap@749d2b0\\nsolflare-wallet/aptos-snap@eef10b5\\nsolflare-wallet/sui-snap@898295f\\nStatement from the Assessment Team:\\nImplement logic that removes development/localhost origin from the allow list for production builds. Employ strict checks on the format of provided origin. Do not by default allow all subdomains. | null | ```\\nmodule.exports.onRpcRequest = async ({ origin, request }) => {\\n if (\\n !origin ||\\n (\\n !origin.match(/^https?:\\/\\/localhost:[0-9]{1,4}$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.com$/) &&\\n !origin.match(/^https?:\\/\\/(?:\\S+\\.)?solflare\\.dev$/)\\n )\\n ) {\\n throw new Error('Invalid origin');\\n }\\n```\\n |
All Roles Are Set to the Same Account. | low | From talking to the team, we know that all roles will be held by different timelock contracts. In the code, however, they are all initialized to the same `admin` address, which means that most roles would need to be transferred. Given that each transfer takes 2 transactions and there are 3 roles to transfer, that equates to 6 transactions just to properly set up the contract on deployment. That also increases setup time and the room for error.\\nIt should also be noted that the `regulator` role is not being initialized there at all.\\n```\\n// solhint-disable-next-line func-name-mixedcase\\nfunction \\_\\_DramAccessControl\\_init\\_unchained(\\n address admin\\n) internal onlyInitializing {\\n \\_grantRole(ADMIN\\_ROLE, admin);\\n \\_grantRole(ROLE\\_MANAGER\\_ROLE, admin);\\n \\_grantRole(SUPPLY\\_MANAGER\\_ROLE, admin);\\n}\\n```\\n | Resolution\\nAll roles, including the regulatory manager, are now set to different accounts. The modification can be found in commit `b70348e6998e35282212243ea639d174ced1ef2d`.\\nWe suggest passing several addresses into the constructor and setting the roles to the correct addresses right away. Alternatively, the roles can be left unset and granted later, in order to avoid having to revoke roles that the admin should not hold, such as `SUPPLY_MANAGER_ROLE`. | null | ```\\n// solhint-disable-next-line func-name-mixedcase\\nfunction \\_\\_DramAccessControl\\_init\\_unchained(\\n address admin\\n) internal onlyInitializing {\\n \\_grantRole(ADMIN\\_ROLE, admin);\\n \\_grantRole(ROLE\\_MANAGER\\_ROLE, admin);\\n \\_grantRole(SUPPLY\\_MANAGER\\_ROLE, admin);\\n}\\n```\\n
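A sketch of the suggested initializer taking one address per role, based on OpenZeppelin's upgradeable AccessControl; the role constants other than the three shown in the original snippet (notably the regulatory one) are assumptions about the Dram codebase.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import {AccessControlUpgradeable} from
    "@openzeppelin/contracts-upgradeable/access/AccessControlUpgradeable.sol";

/// Sketch only: every role gets its own account at initialization time,
/// so no post-deployment transfers (or revocations) are required.
abstract contract DramAccessControlSketch is AccessControlUpgradeable {
    bytes32 public constant ADMIN_ROLE = keccak256("ADMIN_ROLE");
    bytes32 public constant ROLE_MANAGER_ROLE = keccak256("ROLE_MANAGER_ROLE");
    bytes32 public constant SUPPLY_MANAGER_ROLE = keccak256("SUPPLY_MANAGER_ROLE");
    // Hypothetical constant for the regulator role mentioned in the finding.
    bytes32 public constant REGULATORY_MANAGER_ROLE = keccak256("REGULATORY_MANAGER_ROLE");

    // solhint-disable-next-line func-name-mixedcase
    function __DramAccessControl_init_unchained(
        address admin,
        address roleManager,
        address supplyManager,
        address regulatoryManager
    ) internal onlyInitializing {
        _grantRole(ADMIN_ROLE, admin);
        _grantRole(ROLE_MANAGER_ROLE, roleManager);
        _grantRole(SUPPLY_MANAGER_ROLE, supplyManager);
        _grantRole(REGULATORY_MANAGER_ROLE, regulatoryManager);
    }
}
```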
Setting MintCap to a Specific Value Is Prone to Front-Running. | low | The `Dram` stablecoin uses an approval-like model to set the minting caps of different operators, so it is prone to the same front-running issues as the approval mechanism. When the `setMintCap` function is used directly, an operator could front-run the transaction, completely spend the old cap, and then spend the new one once the cap-setting transaction goes through.\\n```\\nfunction setMintCap(\\n address operator,\\n uint256 amount\\n) external onlyRoleOrAdmin(ROLE\\_MANAGER\\_ROLE) {\\n \\_setMintCap(operator, amount);\\n}\\n```\\n\\nImagine the following scenario:\\nAlice has a mint cap of 10.\\nA transaction is sent to the mempool to set it to 5 (decrease the cap). The intent is that Alice should only be able to mint 5 tokens.\\nAlice front-runs this transaction and mints 10 tokens.\\nOnce transaction 2 goes through, Alice mints 5 more tokens.\\nIn total, Alice minted 15 tokens. | Avoid setting specific mint caps directly; instead, use the increase/decrease methods that are already present in the code. | null | ```\\nfunction setMintCap(\\n address operator,\\n uint256 amount\\n) external onlyRoleOrAdmin(ROLE\\_MANAGER\\_ROLE) {\\n \\_setMintCap(operator, amount);\\n}\\n```\\n
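A sketch of what the relative-adjustment pattern could look like, wrapped in an abstract contract so it compiles standalone; the `mintCap` getter, `_setMintCap` hook, and `onlyRoleOrAdmin` check are assumed to exist in the Dram codebase and are only stubbed here.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: callers adjust the cap relative to its current value instead of
/// setting an absolute number that an operator can front-run.
abstract contract DramMintCapSketch {
    bytes32 public constant ROLE_MANAGER_ROLE = keccak256("ROLE_MANAGER_ROLE");

    // Stubs for functionality assumed to exist in the real contract.
    function mintCap(address operator) public view virtual returns (uint256);
    function _setMintCap(address operator, uint256 amount) internal virtual;
    function _checkRoleOrAdmin(bytes32 role) internal view virtual;

    modifier onlyRoleOrAdmin(bytes32 role) {
        _checkRoleOrAdmin(role);
        _;
    }

    function increaseMintCap(address operator, uint256 addedAmount)
        external
        onlyRoleOrAdmin(ROLE_MANAGER_ROLE)
    {
        _setMintCap(operator, mintCap(operator) + addedAmount);
    }

    function decreaseMintCap(address operator, uint256 subtractedAmount)
        external
        onlyRoleOrAdmin(ROLE_MANAGER_ROLE)
    {
        uint256 current = mintCap(operator);
        require(current >= subtractedAmount, "decrease exceeds current cap");
        _setMintCap(operator, current - subtractedAmount);
    }
}
```

This mirrors the `increaseAllowance`/`decreaseAllowance` pattern commonly used to mitigate the classic ERC-20 approval race.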
Incorrect Privileges setOperatorAddresses Acknowledged | high | The function `setOperatorAddresses`, instead of allowing the Operator to update its own address as well as the Fee Recipient address, incorrectly grants this privilege to the Fee Recipient. As a result, the Fee Recipient can modify the operator address at will to DoS the operator and exploit the system. Additionally, upon reviewing the documentation, we found that there are no administrative rights defined for the Fee Recipient, highlighting the incorrect privilege allocation.\\n```\\nfunction setOperatorAddresses(\\n uint256 \\_operatorIndex,\\n address \\_operatorAddress,\\n address \\_feeRecipientAddress\\n) external onlyActiveOperatorFeeRecipient(\\_operatorIndex) {\\n \\_checkAddress(\\_operatorAddress);\\n \\_checkAddress(\\_feeRecipientAddress);\\n StakingContractStorageLib.OperatorsSlot storage operators = StakingContractStorageLib.getOperators();\\n\\n operators.value[\\_operatorIndex].operator = \\_operatorAddress;\\n operators.value[\\_operatorIndex].feeRecipient = \\_feeRecipientAddress;\\n emit ChangedOperatorAddresses(\\_operatorIndex, \\_operatorAddress, \\_feeRecipientAddress);\\n}\\n```\\n | The modifier should be `onlyActiveOperatorOrAdmin`, allowing only the operator itself or the system admin to update the necessary addresses.\\nAlso, when transferring crucial privileges from one address to another, updating the operator's address should follow a two-step approach, similar to transferring ownership. | null | ```\\nfunction setOperatorAddresses(\\n uint256 \\_operatorIndex,\\n address \\_operatorAddress,\\n address \\_feeRecipientAddress\\n) external onlyActiveOperatorFeeRecipient(\\_operatorIndex) {\\n \\_checkAddress(\\_operatorAddress);\\n \\_checkAddress(\\_feeRecipientAddress);\\n StakingContractStorageLib.OperatorsSlot storage operators = StakingContractStorageLib.getOperators();\\n\\n operators.value[\\_operatorIndex].operator = \\_operatorAddress;\\n operators.value[\\_operatorIndex].feeRecipient = \\_feeRecipientAddress;\\n emit ChangedOperatorAddresses(\\_operatorIndex, \\_operatorAddress, \\_feeRecipientAddress);\\n}\\n```\\n
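A hedged sketch of both recommendations combined: the setter gated to the operator itself or the admin, and a two-step propose/accept handshake for the operator address. Storage layout, names, and the admin getter are simplified placeholders, not the actual StakingContract internals.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: operator/fee-recipient updates restricted to the operator or the
/// admin, with the operator address itself moved via a two-step handshake.
abstract contract OperatorAddressChangeSketch {
    struct OperatorInfo {
        address operator;
        address feeRecipient;
    }

    mapping(uint256 => OperatorInfo) internal operators;
    mapping(uint256 => address) public pendingOperator;

    event ChangedOperatorAddresses(uint256 indexed operatorIndex, address operator, address feeRecipient);

    function _getAdmin() internal view virtual returns (address);

    modifier onlyActiveOperatorOrAdmin(uint256 _operatorIndex) {
        require(
            msg.sender == operators[_operatorIndex].operator || msg.sender == _getAdmin(),
            "Unauthorized"
        );
        _;
    }

    /// Step 1: the current operator (or the admin) proposes a new operator address.
    function proposeOperatorAddress(uint256 _operatorIndex, address _newOperator)
        external
        onlyActiveOperatorOrAdmin(_operatorIndex)
    {
        require(_newOperator != address(0), "InvalidZeroAddress");
        pendingOperator[_operatorIndex] = _newOperator;
    }

    /// Step 2: the proposed address accepts, proving it is a controllable key.
    function acceptOperatorAddress(uint256 _operatorIndex) external {
        require(msg.sender == pendingOperator[_operatorIndex], "Unauthorized");
        operators[_operatorIndex].operator = msg.sender;
        delete pendingOperator[_operatorIndex];
        emit ChangedOperatorAddresses(_operatorIndex, msg.sender, operators[_operatorIndex].feeRecipient);
    }

    /// The fee recipient can still be updated directly, but never by the fee recipient alone.
    function setFeeRecipient(uint256 _operatorIndex, address _feeRecipient)
        external
        onlyActiveOperatorOrAdmin(_operatorIndex)
    {
        require(_feeRecipient != address(0), "InvalidZeroAddress");
        operators[_operatorIndex].feeRecipient = _feeRecipient;
        emit ChangedOperatorAddresses(_operatorIndex, operators[_operatorIndex].operator, _feeRecipient);
    }
}
```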
Unconstrained Snapshot While Setting Operator Limit | medium | The function `setOperatorLimit`, as the name says, allows the `SYS_ADMIN` to set/update the staking limit for an operator. The function ensures that if the limit is being increased, the `_snapshot` must be ahead of the last validator edit (the block.number at which the last validator edit occurred). However, the parameter `_snapshot` is unconstrained and can be any number. Also, the functions `addValidators` and `removeValidators` update the `block.number` signifying the last validator edit, but never constrain new edits with it. Since there are no publicly available functions to access this value, the functionality is even more confusing and may be unnecessary.\\n```\\nif (\\n operators.value[\\_operatorIndex].limit < \\_limit &&\\n StakingContractStorageLib.getLastValidatorEdit() > \\_snapshot\\n) {\\n revert LastEditAfterSnapshot();\\n}\\n```\\n | If the functionality is not needed, consider removing it. Otherwise, add the necessary logic to constrain the last validator edit, or add public functions for users to access it. | null | ```\\nif (\\n operators.value[\\_operatorIndex].limit < \\_limit &&\\n StakingContractStorageLib.getLastValidatorEdit() > \\_snapshot\\n) {\\n revert LastEditAfterSnapshot();\\n}\\n```\\n
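If the snapshot mechanism is kept, a small view function on `StakingContract` would let the admin and tooling read the value they are expected to pass as `_snapshot`; sketched below as an excerpt against the storage library call already used in the snippet above.

```solidity
/// Sketch only: expose the last validator edit so callers can pick a meaningful
/// _snapshot value instead of guessing an arbitrary number.
function getLastValidatorEdit() external view returns (uint256) {
    return StakingContractStorageLib.getLastValidatorEdit();
}
```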
Hardcoded Operator Limit Logic | medium | The contract defines some hardcoded limits which is not the right approach for upgradeable contracts and opens doors for accidental mistakes, if not handled with care.\\nThe operators for the current version are limited to 1. If the auditee team decides to open the system to work with more operators but fails to change the limit while upgrading, the upgraded contract will have no effect, and will still disallow any more operators to be added.\\n```\\nfunction addOperator(address \\_operatorAddress, address \\_feeRecipientAddress) external onlyAdmin returns (uint256) {\\n StakingContractStorageLib.OperatorsSlot storage operators = StakingContractStorageLib.getOperators();\\n StakingContractStorageLib.OperatorInfo memory newOperator;\\n\\n if (operators.value.length == 1) {\\n revert MaximumOperatorCountAlreadyReached();\\n }\\n```\\n\\nAlso, the function `_depositOnOneOperator` hardcodes the operator Index as 0 since the contract only supports one operator.\\n```\\nfunction \\_depositOnOneOperator(uint256 \\_depositCount, uint256 \\_totalAvailableValidators) internal {\\n StakingContractStorageLib.setTotalAvailableValidators(\\_totalAvailableValidators - \\_depositCount);\\n \\_depositValidatorsOfOperator(0, \\_depositCount);\\n}\\n```\\n | A better approach could be to constrain the limit of operators that can be added with a storage variable or constant, provided at the time of contract initialization. The contract should also consider supporting dynamic operator deposits for future versions instead of the default hardcoded index. | null | ```\\nfunction addOperator(address \\_operatorAddress, address \\_feeRecipientAddress) external onlyAdmin returns (uint256) {\\n StakingContractStorageLib.OperatorsSlot storage operators = StakingContractStorageLib.getOperators();\\n StakingContractStorageLib.OperatorInfo memory newOperator;\\n\\n if (operators.value.length == 1) {\\n revert MaximumOperatorCountAlreadyReached();\\n }\\n```\\n |
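A minimal sketch of storing the maximum operator count instead of hardcoding `length == 1`; the storage accessors are stubbed, since the real contract would keep this value in its slot-based storage library.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: the operator capacity is a configurable value set at
/// initialization rather than a literal baked into addOperator.
abstract contract OperatorLimitSketch {
    error MaximumOperatorCountAlreadyReached();
    error InvalidArgument();

    // Stubs for storage assumed to live in the storage library.
    function _getMaxOperatorCount() internal view virtual returns (uint256);
    function _setMaxOperatorCount(uint256 maxCount) internal virtual;
    function _operatorCount() internal view virtual returns (uint256);

    function _initializeOperatorCapacity(uint256 maxCount) internal {
        if (maxCount == 0) revert InvalidArgument();
        _setMaxOperatorCount(maxCount);
    }

    /// Called at the top of addOperator instead of `operators.value.length == 1`.
    function _checkOperatorCapacity() internal view {
        if (_operatorCount() >= _getMaxOperatorCount()) {
            revert MaximumOperatorCountAlreadyReached();
        }
    }
}
```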
StakingContract - PubKey Length Checks Not Always Enforced | medium | `addValidators` checks that the provided `bytes pubKey` is a multiple of the expected pubkey length while functions like `setWithdrawer` do not enforce similar length checks. This is an inconsistency that should be avoided.\\n`addValidators` enforcing input length checks\\n```\\nfunction addValidators(\\n uint256 \\_operatorIndex,\\n uint256 \\_keyCount,\\n bytes calldata \\_publicKeys,\\n bytes calldata \\_signatures\\n) external onlyActiveOperator(\\_operatorIndex) {\\n if (\\_keyCount == 0) {\\n revert InvalidArgument();\\n }\\n\\n if (\\_publicKeys.length % PUBLIC\\_KEY\\_LENGTH != 0 || \\_publicKeys.length / PUBLIC\\_KEY\\_LENGTH != \\_keyCount) {\\n revert InvalidPublicKeys();\\n }\\n```\\n\\n`setWithdrawer` accepting any length for a `pubKey`. Note that `_getPubKeyRoot` will take any input provided and concat it the zero bytes.\\n```\\n/// @notice Set withdrawer for public key\\n/// @dev Only callable by current public key withdrawer\\n/// @param \\_publicKey Public key to change withdrawer\\n/// @param \\_newWithdrawer New withdrawer address\\nfunction setWithdrawer(bytes calldata \\_publicKey, address \\_newWithdrawer) external {\\n if (!StakingContractStorageLib.getWithdrawerCustomizationEnabled()) {\\n revert Forbidden();\\n }\\n \\_checkAddress(\\_newWithdrawer);\\n bytes32 pubkeyRoot = \\_getPubKeyRoot(\\_publicKey);\\n StakingContractStorageLib.WithdrawersSlot storage withdrawers = StakingContractStorageLib.getWithdrawers();\\n\\n if (withdrawers.value[pubkeyRoot] != msg.sender) {\\n revert Unauthorized();\\n }\\n\\n emit ChangedWithdrawer(\\_publicKey, \\_newWithdrawer);\\n\\n withdrawers.value[pubkeyRoot] = \\_newWithdrawer;\\n}\\n```\\n\\n```\\nfunction \\_getPubKeyRoot(bytes memory \\_publicKey) internal pure returns (bytes32) {\\n return sha256(abi.encodePacked(\\_publicKey, bytes16(0)));\\n}\\n```\\n\\nsimilarly, the withdraw family of functions does not enforce a pubkey length either. However, it is unlikely that someone finds a pubkey that matches a root for the attackers address.\\n```\\n/// @notice Withdraw the Execution Layer Fee for a given validator public key\\n/// @dev Funds are sent to the withdrawer account\\n/// @param \\_publicKey Validator to withdraw Execution Layer Fees from\\nfunction withdrawELFee(bytes calldata \\_publicKey) external {\\n \\_onlyWithdrawerOrAdmin(\\_publicKey);\\n \\_deployAndWithdraw(\\_publicKey, EXECUTION\\_LAYER\\_SALT\\_PREFIX, StakingContractStorageLib.getELDispatcher());\\n}\\n```\\n\\nNevertheless, the methods should be hardened so as not to give a malicious actor the freedom to use an unexpected input size for the `pubKey` argument. | Enforce pubkey length checks when accepting a single pubkey as bytes similar to the batch functions that check for a multiple of ´PUBLIC_KEY_LENGTH´. Alternatively, declare the function argument as `bytes48` (however, in this case inputs may be auto-padded to fit the expected length, pot. covering situations that otherwise would throw an error) | null | ```\\nfunction addValidators(\\n uint256 \\_operatorIndex,\\n uint256 \\_keyCount,\\n bytes calldata \\_publicKeys,\\n bytes calldata \\_signatures\\n) external onlyActiveOperator(\\_operatorIndex) {\\n if (\\_keyCount == 0) {\\n revert InvalidArgument();\\n }\\n\\n if (\\_publicKeys.length % PUBLIC\\_KEY\\_LENGTH != 0 || \\_publicKeys.length / PUBLIC\\_KEY\\_LENGTH != \\_keyCount) {\\n revert InvalidPublicKeys();\\n }\\n```\\n |
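A sketch of a shared length guard applied to the single-key entry points before the root is computed; the 48-byte constant matches the `PUBLIC_KEY_LENGTH` semantics already enforced by `addValidators`.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: every function that accepts a single public key validates its
/// length before hashing it into a root.
abstract contract PubKeyLengthSketch {
    uint256 internal constant PUBLIC_KEY_LENGTH = 48;

    error InvalidPublicKeys();

    function _checkPublicKeyLength(bytes calldata _publicKey) internal pure {
        if (_publicKey.length != PUBLIC_KEY_LENGTH) {
            revert InvalidPublicKeys();
        }
    }

    function _getPubKeyRoot(bytes memory _publicKey) internal pure returns (bytes32) {
        return sha256(abi.encodePacked(_publicKey, bytes16(0)));
    }

    /// Example usage for setWithdrawer / withdrawELFee-style entry points.
    function _pubKeyRootChecked(bytes calldata _publicKey) internal pure returns (bytes32) {
        _checkPublicKeyLength(_publicKey);
        return _getPubKeyRoot(_publicKey);
    }
}
```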
Unpredictable Behavior Due to Admin Front Running or General Bad Timing | medium | In a number of cases, administrators of contracts can update or upgrade things in the system without warning. This has the potential to violate a security goal of the system.\\nSpecifically, privileged roles could use front running to make malicious changes just ahead of incoming transactions, or purely accidental negative effects could occur due to the unfortunate timing of changes.\\nSome instances of this are more important than others, but in general, users of the system should have assurances about the behavior of the action they're about to take.\\nUpgradeable TU proxy\\nFee changes take effect immediately\\n```\\n/// @notice Change the Operator fee\\n/// @param \\_operatorFee Fee in Basis Point\\nfunction setOperatorFee(uint256 \\_operatorFee) external onlyAdmin {\\n if (\\_operatorFee > StakingContractStorageLib.getOperatorCommissionLimit()) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setOperatorFee(\\_operatorFee);\\n emit ChangedOperatorFee(\\_operatorFee);\\n}\\n```\\n\\n```\\n/// @notice Change the Global fee\\n/// @param \\_globalFee Fee in Basis Point\\nfunction setGlobalFee(uint256 \\_globalFee) external onlyAdmin {\\n if (\\_globalFee > StakingContractStorageLib.getGlobalCommissionLimit()) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setGlobalFee(\\_globalFee);\\n emit ChangedGlobalFee(\\_globalFee);\\n}\\n```\\n | The underlying issue is that users of the system can't be sure what the behavior of a function call will be, and this is because the behavior can change at any time.\\nWe recommend giving the user advance notice of changes with a time lock. For example, make all upgrades require two steps with a mandatory time window between them. The first step merely broadcasts to users that a particular change is coming, and the second step commits that change after a suitable waiting period. | null | ```\\n/// @notice Change the Operator fee\\n/// @param \\_operatorFee Fee in Basis Point\\nfunction setOperatorFee(uint256 \\_operatorFee) external onlyAdmin {\\n if (\\_operatorFee > StakingContractStorageLib.getOperatorCommissionLimit()) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setOperatorFee(\\_operatorFee);\\n emit ChangedOperatorFee(\\_operatorFee);\\n}\\n```\\n |
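One possible shape for the suggested two-step, timelocked change, shown for the global fee; the delay constant, plain storage variables, and hook names are simplifications of the contract's slot-based storage and are not the project's actual API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: fee changes are announced first and can only be committed after
/// a mandatory waiting period, giving stakers time to react.
abstract contract TimelockedGlobalFeeSketch {
    uint256 public constant FEE_CHANGE_DELAY = 3 days;

    uint256 public globalFee;
    uint256 public pendingGlobalFee;
    uint256 public pendingGlobalFeeEta;

    error InvalidFee();
    error TooEarly();

    event GlobalFeeChangeAnnounced(uint256 newFee, uint256 eta);
    event ChangedGlobalFee(uint256 newFee);

    function _checkAdmin() internal view virtual;
    function _globalCommissionLimit() internal view virtual returns (uint256);

    /// Step 1: broadcast the upcoming change.
    function announceGlobalFee(uint256 _globalFee) external {
        _checkAdmin();
        if (_globalFee > _globalCommissionLimit()) revert InvalidFee();
        pendingGlobalFee = _globalFee;
        pendingGlobalFeeEta = block.timestamp + FEE_CHANGE_DELAY;
        emit GlobalFeeChangeAnnounced(_globalFee, pendingGlobalFeeEta);
    }

    /// Step 2: commit only once the waiting period has elapsed.
    function commitGlobalFee() external {
        _checkAdmin();
        if (pendingGlobalFeeEta == 0 || block.timestamp < pendingGlobalFeeEta) revert TooEarly();
        globalFee = pendingGlobalFee;
        pendingGlobalFeeEta = 0;
        emit ChangedGlobalFee(globalFee);
    }
}
```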
Potentially Uninitialized Implementations | medium | Most contracts in the system are meant to be used with a proxy pattern. First, the implementations are deployed, and then proxies are deployed that delegatecall into the respective implementations following an initialization call (via hardhat, in the same transaction). However, the implementations are neither initialized explicitly nor protected from other actors claiming/initializing them. This allows anyone to call initialization functions on the implementations for use in phishing attacks (i.e., contract implementation addresses are typically listed on the official project website as valid contracts), which may affect the reputation of the system.\\nNone of the implementations allow unprotected delegatecalls or selfdestructs, lowering the severity of this finding.\\n```\\nfunction initialize\\_1(\\n address \\_admin,\\n address \\_treasury,\\n address \\_depositContract,\\n address \\_elDispatcher,\\n address \\_clDispatcher,\\n address \\_feeRecipientImplementation,\\n uint256 \\_globalFee,\\n uint256 \\_operatorFee,\\n uint256 globalCommissionLimitBPS,\\n uint256 operatorCommissionLimitBPS\\n) external init(1) {\\n```\\n\\n```\\n/// @notice Initializes the receiver\\n/// @param \\_dispatcher Address that will handle the fee dispatching\\n/// @param \\_publicKeyRoot Public Key root assigned to this receiver\\nfunction init(address \\_dispatcher, bytes32 \\_publicKeyRoot) external {\\n if (initialized) {\\n revert AlreadyInitialized();\\n }\\n initialized = true;\\n dispatcher = IFeeDispatcher(\\_dispatcher);\\n publicKeyRoot = \\_publicKeyRoot;\\n stakingContract = msg.sender; // The staking contract always calls init\\n}\\n```\\n\\n```\\n/// @param \\_publicKeyRoot Public Key root assigned to this receiver\\nfunction init(address \\_dispatcher, bytes32 \\_publicKeyRoot) external {\\n if (initialized) {\\n revert AlreadyInitialized();\\n }\\n initialized = true;\\n dispatcher = IFeeDispatcher(\\_dispatcher);\\n publicKeyRoot = \\_publicKeyRoot;\\n}\\n```\\n | Petrify the contracts in the constructor and disallow other actors from claiming/initializing the implementations. | null | ```\\nfunction initialize\\_1(\\n address \\_admin,\\n address \\_treasury,\\n address \\_depositContract,\\n address \\_elDispatcher,\\n address \\_clDispatcher,\\n address \\_feeRecipientImplementation,\\n uint256 \\_globalFee,\\n uint256 \\_operatorFee,\\n uint256 globalCommissionLimitBPS,\\n uint256 operatorCommissionLimitBPS\\n) external init(1) {\\n```\\n
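A sketch of petrifying the implementation in its constructor; the version is kept in a plain storage variable here rather than the contract's storage slot library, and the highest-version constant is illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: the constructor runs in the implementation's own storage and
/// marks it as fully initialized, so nobody can claim the implementation.
/// Proxies are unaffected because the constructor never runs in their context.
contract PetrifiedImplementationSketch {
    uint256 private constant HIGHEST_VERSION = 2;
    uint256 private version;

    error AlreadyInitialized();

    constructor() {
        version = HIGHEST_VERSION;
    }

    modifier init(uint256 _version) {
        if (_version != version + 1) {
            revert AlreadyInitialized();
        }
        version = _version;
        _;
    }
}
```

For contracts built on OpenZeppelin's `Initializable`, calling `_disableInitializers()` in the constructor achieves a similar effect.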
Operator May DoS the Withdrawal or Make It More Expensive | medium | While collecting fees, the operator may:\\ncause a DoS of the funds/rewards withdrawal by reverting the call, thus reverting the whole transaction. By doing this, it won't receive any rewards, but neither will the treasury or the withdrawer.\\nmake the withdrawal more expensive by sending a huge chunk of `returndata`. As the `returndata` is copied into memory in the caller's context, it will add extra gas overhead for the withdrawer, making the withdrawal more expensive.\\nor mint gas tokens.\\n```\\nif (operatorFee > 0) {\\n (status, data) = operator.call{value: operatorFee}("");\\n if (status == false) {\\n revert FeeRecipientReceiveError(data);\\n }\\n}\\n```\\n | A possible solution could be to make a low-level call in an inline assembly block, restricting the `returndata` to a couple of bytes, and, instead of reverting on a failed call, emit an event flagging the call that failed. | null | ```\\nif (operatorFee > 0) {\\n (status, data) = operator.call{value: operatorFee}("");\\n if (status == false) {\\n revert FeeRecipientReceiveError(data);\\n }\\n}\\n```\\n
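A hedged sketch of the recommended pattern: a raw call that copies no returndata and flags failures through an event instead of reverting the whole withdrawal. The event name is illustrative; a gas cap on the call could additionally bound gas-token minting.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: pay the operator without letting it revert the withdrawal or
/// bloat memory with a huge returndata payload.
abstract contract SafeOperatorPayoutSketch {
    event OperatorFeeTransferFailed(address indexed operator, uint256 amount);

    function _payOperator(address operator, uint256 operatorFee) internal {
        if (operatorFee == 0) return;
        bool status;
        assembly {
            // out/outsize are 0, so no returndata is copied into memory.
            status := call(gas(), operator, operatorFee, 0, 0, 0, 0)
        }
        if (!status) {
            // Flag the failure instead of reverting the whole fee dispatch.
            emit OperatorFeeTransferFailed(operator, operatorFee);
        }
    }
}
```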
ConsensusLayerFeeDispatcher/ExecutionLayerFeeDispatcher - Should Hardcode autoPetrify With Highest Initializable Version Instead of User Provided Argument | low | The version to auto-initialize is not hardcoded with the constructor. On deployment, the deployer may accidentally use the wrong version, allowing anyone to call `initialize` on the contract.\\n```\\n/// @notice Constructor method allowing us to prevent calls to initCLFR by setting the appropriate version\\nconstructor(uint256 \\_version) {\\n VERSION\\_SLOT.setUint256(\\_version);\\n}\\n```\\n\\n```\\n/// @notice Constructor method allowing us to prevent calls to initCLFR by setting the appropriate version\\nconstructor(uint256 \\_version) {\\n VERSION\\_SLOT.setUint256(\\_version);\\n}\\n\\n/// @notice Initialize the contract by storing the staking contract and the public key in storage\\n/// @param \\_stakingContract Address of the Staking Contract\\nfunction initELD(address \\_stakingContract) external init(1) {\\n STAKING\\_CONTRACT\\_ADDRESS\\_SLOT.setAddress(\\_stakingContract);\\n}\\n```\\n | Similar to the `init(1)` modifier, it is suggested to track the highest version as a `const int` with the contract and auto-initialize to the highest version in the constructor instead of taking the highest version as a deployment argument. | null | ```\\n/// @notice Constructor method allowing us to prevent calls to initCLFR by setting the appropriate version\\nconstructor(uint256 \\_version) {\\n VERSION\\_SLOT.setUint256(\\_version);\\n}\\n```\\n |
StakingContract - Misleading Comment | low | The comment notes that the expected caller is `admin` while the modifier checks that `msg.sender` is an active operator.\\n```\\n/// @notice Ensures that the caller is the admin\\nmodifier onlyActiveOperator(uint256 \\_operatorIndex) {\\n \\_onlyActiveOperator(\\_operatorIndex);\\n \\_;\\n}\\n```\\n | Rectify the comment to accurately describe the intention of the method/modifier. | null | ```\\n/// @notice Ensures that the caller is the admin\\nmodifier onlyActiveOperator(uint256 \\_operatorIndex) {\\n \\_onlyActiveOperator(\\_operatorIndex);\\n \\_;\\n}\\n```\\n |
Impractical Checks for Global/Operator Fees and the Commission Limits | low | The contract initialization sets up the global and operator fees and also their commission limits. However, the checks just make sure that the fees or commission limit is up to 100% which is not a very practical check. Any unusual value, for instance, if set to 100% will mean the whole rewards/funds will be non-exempted and taxed as global fees, which we believe will never be a case practically.\\n```\\nif (\\_globalFee > BASIS\\_POINTS) {\\n revert InvalidFee();\\n}\\nStakingContractStorageLib.setGlobalFee(\\_globalFee);\\nif (\\_operatorFee > BASIS\\_POINTS) {\\n revert InvalidFee();\\n}\\nStakingContractStorageLib.setOperatorFee(\\_operatorFee);\\n```\\n\\n```\\nfunction initialize\\_2(uint256 globalCommissionLimitBPS, uint256 operatorCommissionLimitBPS) public init(2) {\\n if (globalCommissionLimitBPS > BASIS\\_POINTS) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setGlobalCommissionLimit(globalCommissionLimitBPS);\\n if (operatorCommissionLimitBPS > BASIS\\_POINTS) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setOperatorCommissionLimit(operatorCommissionLimitBPS);\\n}\\n```\\n\\n```\\nfunction setGlobalFee(uint256 \\_globalFee) external onlyAdmin {\\n if (\\_globalFee > StakingContractStorageLib.getGlobalCommissionLimit()) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setGlobalFee(\\_globalFee);\\n emit ChangedGlobalFee(\\_globalFee);\\n}\\n```\\n\\n```\\nfunction setOperatorFee(uint256 \\_operatorFee) external onlyAdmin {\\n if (\\_operatorFee > StakingContractStorageLib.getOperatorCommissionLimit()) {\\n revert InvalidFee();\\n }\\n StakingContractStorageLib.setOperatorFee(\\_operatorFee);\\n emit ChangedOperatorFee(\\_operatorFee);\\n}\\n```\\n | The fees should be checked with a more practical limit. For instance, checking against a min - max limit, like 20% - 40%. | null | ```\\nif (\\_globalFee > BASIS\\_POINTS) {\\n revert InvalidFee();\\n}\\nStakingContractStorageLib.setGlobalFee(\\_globalFee);\\nif (\\_operatorFee > BASIS\\_POINTS) {\\n revert InvalidFee();\\n}\\nStakingContractStorageLib.setOperatorFee(\\_operatorFee);\\n```\\n |
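A sketch of a band check; the 20%-40% bounds are just the example range from the recommendation, not protocol parameters.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: fees are validated against an explicit [min, max] band rather
/// than only the 100% ceiling.
abstract contract FeeBoundsSketch {
    uint256 internal constant BASIS_POINTS = 10_000;
    uint256 internal constant MIN_FEE_BPS = 2_000; // 20%, example value
    uint256 internal constant MAX_FEE_BPS = 4_000; // 40%, example value

    error InvalidFee();

    function _checkFeeBounds(uint256 feeBps) internal pure {
        if (feeBps < MIN_FEE_BPS || feeBps > MAX_FEE_BPS) {
            revert InvalidFee();
        }
    }
}
```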
Contracts Should Inherit From Their Interfaces | low | The following contracts should enforce correct interface implementation by inheriting from the interface declarations.\\n```\\n/// @title Ethereum Staking Contract\\n/// @author Kiln\\n/// @notice You can use this contract to store validator keys and have users fund them and trigger deposits.\\ncontract StakingContract {\\n using StakingContractStorageLib for bytes32;\\n```\\n\\n```\\ninterface IStakingContractFeeDetails {\\n function getWithdrawerFromPublicKeyRoot(bytes32 \\_publicKeyRoot) external view returns (address);\\n\\n function getTreasury() external view returns (address);\\n\\n function getOperatorFeeRecipient(bytes32 pubKeyRoot) external view returns (address);\\n\\n function getGlobalFee() external view returns (uint256);\\n\\n function getOperatorFee() external view returns (uint256);\\n\\n function getExitRequestedFromRoot(bytes32 \\_publicKeyRoot) external view returns (bool);\\n\\n function getWithdrawnFromPublicKeyRoot(bytes32 \\_publicKeyRoot) external view returns (bool);\\n\\n function toggleWithdrawnFromPublicKeyRoot(bytes32 \\_publicKeyRoot) external;\\n}\\n```\\n\\n```\\ninterface IFeeRecipient {\\n function init(address \\_dispatcher, bytes32 \\_publicKeyRoot) external;\\n\\n function withdraw() external;\\n}\\n```\\n | Inherit from interface. | null | ```\\n/// @title Ethereum Staking Contract\\n/// @author Kiln\\n/// @notice You can use this contract to store validator keys and have users fund them and trigger deposits.\\ncontract StakingContract {\\n using StakingContractStorageLib for bytes32;\\n```\\n |
Misleading Error Statements | low | The contracts define custom errors to revert transactions on failed operations or invalid input, however, they convey little to no information, making it difficult for the off-chain monitoring tools to track relevant updates.\\n```\\nerror Forbidden();\\nerror InvalidFee();\\nerror Deactivated();\\nerror NoOperators();\\nerror InvalidCall();\\nerror Unauthorized();\\nerror DepositFailure();\\nerror DepositsStopped();\\nerror InvalidArgument();\\nerror UnsortedIndexes();\\nerror InvalidPublicKeys();\\nerror InvalidSignatures();\\nerror InvalidWithdrawer();\\nerror InvalidZeroAddress();\\nerror AlreadyInitialized();\\nerror InvalidDepositValue();\\nerror NotEnoughValidators();\\nerror InvalidValidatorCount();\\nerror DuplicateValidatorKey(bytes);\\nerror FundedValidatorDeletionAttempt();\\nerror OperatorLimitTooHigh(uint256 limit, uint256 keyCount);\\nerror MaximumOperatorCountAlreadyReached();\\nerror LastEditAfterSnapshot();\\nerror PublicKeyNotInContract();\\n```\\n\\nFor instance, the `init` modifier is used to initialize the contracts with the current Version. The Version initialization ensures that the provided version must be an increment of the previous version, if not, it reverts with an error as `AlreadyInitialized()`. However, the error doesn't convey an appropriate message correctly, as any version other than the expected version will signify that the version has already been initialized.\\n```\\nmodifier init(uint256 \\_version) {\\n if (\\_version != VERSION\\_SLOT.getUint256() + 1) {\\n revert AlreadyInitialized();\\n }\\n```\\n\\n```\\nmodifier init(uint256 \\_version) {\\n if (\\_version != VERSION\\_SLOT.getUint256() + 1) {\\n revert AlreadyInitialized();\\n }\\n```\\n\\n```\\nmodifier init(uint256 \\_version) {\\n if (\\_version != StakingContractStorageLib.getVersion() + 1) {\\n revert AlreadyInitialized();\\n }\\n```\\n | Use a more meaningful statement with enough information to track off-chain for all the custom errors in every contract in scope. For instance, add the current and supplied versions as indexed parameters, like: IncorrectVersionInitialization(current version, supplied version);\\nAlso, the function can be simplified as\\n```\\n function initELD(address \\_stakingContract) external init(VERSION\\_SLOT.getUint256() + 1) {\\n STAKING\\_CONTRACT\\_ADDRESS\\_SLOT.setAddress(\\_stakingContract);\\n }\\n```\\n | null | ```\\nerror Forbidden();\\nerror InvalidFee();\\nerror Deactivated();\\nerror NoOperators();\\nerror InvalidCall();\\nerror Unauthorized();\\nerror DepositFailure();\\nerror DepositsStopped();\\nerror InvalidArgument();\\nerror UnsortedIndexes();\\nerror InvalidPublicKeys();\\nerror InvalidSignatures();\\nerror InvalidWithdrawer();\\nerror InvalidZeroAddress();\\nerror AlreadyInitialized();\\nerror InvalidDepositValue();\\nerror NotEnoughValidators();\\nerror InvalidValidatorCount();\\nerror DuplicateValidatorKey(bytes);\\nerror FundedValidatorDeletionAttempt();\\nerror OperatorLimitTooHigh(uint256 limit, uint256 keyCount);\\nerror MaximumOperatorCountAlreadyReached();\\nerror LastEditAfterSnapshot();\\nerror PublicKeyNotInContract();\\n```\\n |
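A sketch of the parameterized error suggested in the recommendation, applied to the `init` modifier; plain storage stands in for the version slot used by the real contracts.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Sketch only: the custom error carries the current and supplied versions so
/// off-chain tooling can tell why an initialization was rejected.
abstract contract InformativeInitSketch {
    uint256 private version;

    error IncorrectVersionInitialization(uint256 currentVersion, uint256 suppliedVersion);

    modifier init(uint256 _version) {
        uint256 current = version;
        if (_version != current + 1) {
            revert IncorrectVersionInitialization(current, _version);
        }
        version = _version;
        _;
    }
}
```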