Misleading Error Statements
low
The contracts define custom errors to revert transactions on failed operations or invalid input. However, the errors convey little to no information, making it difficult for off-chain monitoring tools to track relevant updates.
```
error Forbidden();
error InvalidFee();
error Deactivated();
error NoOperators();
error InvalidCall();
error Unauthorized();
error DepositFailure();
error DepositsStopped();
error InvalidArgument();
error UnsortedIndexes();
error InvalidPublicKeys();
error InvalidSignatures();
error InvalidWithdrawer();
error InvalidZeroAddress();
error AlreadyInitialized();
error InvalidDepositValue();
error NotEnoughValidators();
error InvalidValidatorCount();
error DuplicateValidatorKey(bytes);
error FundedValidatorDeletionAttempt();
error OperatorLimitTooHigh(uint256 limit, uint256 keyCount);
error MaximumOperatorCountAlreadyReached();
error LastEditAfterSnapshot();
error PublicKeyNotInContract();
```
For instance, the `init` modifier is used to initialize the contracts with the current version. The version initialization ensures that the provided version is an increment of the previous version; if it is not, it reverts with `AlreadyInitialized()`. However, the error doesn't convey an appropriate message: any version other than the expected one triggers it, which does not necessarily mean the version has already been initialized.
```
modifier init(uint256 _version) {
    if (_version != VERSION_SLOT.getUint256() + 1) {
        revert AlreadyInitialized();
    }
```
```
modifier init(uint256 _version) {
    if (_version != VERSION_SLOT.getUint256() + 1) {
        revert AlreadyInitialized();
    }
```
```
modifier init(uint256 _version) {
    if (_version != StakingContractStorageLib.getVersion() + 1) {
        revert AlreadyInitialized();
    }
```
Use a more meaningful statement, with enough information for off-chain tracking, in all the custom errors of every contract in scope. For instance, add the current and supplied versions as indexed parameters, e.g. `IncorrectVersionInitialization(currentVersion, suppliedVersion)`.
Also, the function can be simplified as:
```
function initELD(address _stakingContract) external init(VERSION_SLOT.getUint256() + 1) {
    STAKING_CONTRACT_ADDRESS_SLOT.setAddress(_stakingContract);
}
```
Architectural Pattern of Internal and External Functions Increases Attack Surface
low
There is an architectural pattern throughout the code of functions being defined in two places: an external wrapper (`name`) that verifies authorization and validates parameters, and an internal function (`_name`) that contains the implementation logic. This pattern separates concerns and avoids redundancy when more than one external function reuses the same internal logic.
For example, `VotingTokenLockupPlans.setupVoting` calls an internal function `_setupVoting` and sets the `holder` parameter to `msg.sender`.
```
function setupVoting(uint256 planId) external nonReentrant returns (address votingVault) {
    votingVault = _setupVoting(msg.sender, planId);
```
```
function _setupVoting(address holder, uint256 planId) internal returns (address) {
    require(ownerOf(planId) == holder, '!owner');
```
In this case, however, there is no situation in which `holder` should not be set to `msg.sender`. Because the internal function doesn't enforce this, it is theoretically possible that if another internal (or derived) function were compromised, it could call `_setupVoting` with `holder` set to `ownerOf(planId)`, even if `msg.sender` isn't the owner. This increases the attack surface by providing unneeded flexibility.
Other examples:
```
function segmentPlan(
    uint256 planId,
    uint256[] memory segmentAmounts
) external nonReentrant returns (uint256[] memory newPlanIds) {
    newPlanIds = new uint256[](segmentAmounts.length);
    for (uint256 i; i < segmentAmounts.length; i++) {
        uint256 newPlanId = _segmentPlan(msg.sender, planId, segmentAmounts[i]);
```
```
function _segmentPlan(address holder, uint256 planId, uint256 segmentAmount) internal returns (uint256 newPlanId) {
    require(ownerOf(planId) == holder, '!owner');
```
```
function revokePlans(uint256[] memory planIds) external nonReentrant {
    for (uint256 i; i < planIds.length; i++) {
        _revokePlan(msg.sender, planIds[i]);
```
```
function _revokePlan(address vestingAdmin, uint256 planId) internal {
    Plan memory plan = plans[planId];
    require(vestingAdmin == plan.vestingAdmin, '!vestingAdmin');
```
Resolution
Fixed as of commit `f4299cdba5e863c9ca2d69a3a7dd554ac34af292`.
To reduce the attack surface, consider hard-coding parameters such as `holder` to `msg.sender` in internal functions when extra flexibility isn't needed.
Revoking Vesting Will Trigger a Taxable Event
low
Resolution
Fixed as of commit `f4299cdba5e863c9ca2d69a3a7dd554ac34af292`.
From previous conversations with the Hedgey team, we identified that users should be in control of when taxable events happen. For that reason, a plan can be redeemed at a point in the past. Unfortunately, the recipient of the vesting plan cannot always be in control of the redemption process. If for one reason or another the administrator of the vesting plan decides to revoke it, any vested funds are sent to the vesting plan holder, triggering the taxable event and burning the NFT.
```
function _revokePlan(address vestingAdmin, uint256 planId) internal {
    Plan memory plan = plans[planId];
    require(vestingAdmin == plan.vestingAdmin, '!vestingAdmin');
    (uint256 balance, uint256 remainder, ) = planBalanceOf(planId, block.timestamp, block.timestamp);
    require(remainder > 0, '!Remainder');
    address holder = ownerOf(planId);
    delete plans[planId];
    _burn(planId);
    TransferHelper.withdrawTokens(plan.token, vestingAdmin, remainder);
    TransferHelper.withdrawTokens(plan.token, holder, balance);
    emit PlanRevoked(planId, balance, remainder);
}
```
```
function _revokePlan(address vestingAdmin, uint256 planId) internal {
    Plan memory plan = plans[planId];
    require(vestingAdmin == plan.vestingAdmin, '!vestingAdmin');
    (uint256 balance, uint256 remainder, ) = planBalanceOf(planId, block.timestamp, block.timestamp);
    require(remainder > 0, '!Remainder');
    address holder = ownerOf(planId);
    delete plans[planId];
    _burn(planId);
    address vault = votingVaults[planId];
    if (vault == address(0)) {
        TransferHelper.withdrawTokens(plan.token, vestingAdmin, remainder);
        TransferHelper.withdrawTokens(plan.token, holder, balance);
    } else {
        delete votingVaults[planId];
        VotingVault(vault).withdrawTokens(vestingAdmin, remainder);
        VotingVault(vault).withdrawTokens(holder, balance);
    }
    emit PlanRevoked(planId, balance, remainder);
}
```
One potential workaround is to only withdraw the unvested portion to the vesting admin while keeping the vested part in the contract. That said, the `amount` and `rate` variables would need to be updated so that no additional vesting accrues for the given plan. This way plan holders will not be entitled to more funds, but will be able to redeem the vested ones at the time they choose.
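A minimal sketch of that workaround, assuming the `Plan` struct exposes mutable `amount` and `rate` fields (only the fields quoted elsewhere in this report are taken from the code; everything else is illustrative):
```
function _revokePlan(address vestingAdmin, uint256 planId) internal {
    Plan storage plan = plans[planId];
    require(vestingAdmin == plan.vestingAdmin, '!vestingAdmin');
    (uint256 balance, uint256 remainder, ) = planBalanceOf(planId, block.timestamp, block.timestamp);
    require(remainder > 0, '!Remainder');
    // Return only the unvested remainder to the admin; the vested part stays
    // in the contract (and the NFT alive) so the holder can redeem it later.
    plan.amount = balance; // cap the plan at what has already vested
    plan.rate = 0;         // stop any further vesting
    TransferHelper.withdrawTokens(plan.token, vestingAdmin, remainder);
    emit PlanRevoked(planId, balance, remainder);
}
```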
Use of selfdestruct Deprecated in VotingVault
low
The `VotingVault.withdrawTokens` function invokes the `selfdestruct` operation when the vault is empty so that it can't be used again.
The use of `selfdestruct` has been deprecated, and a breaking change in its future behavior is expected.
```
function withdrawTokens(address to, uint256 amount) external onlyController {
    TransferHelper.withdrawTokens(token, to, amount);
    if (IERC20(token).balanceOf(address(this)) == 0) selfdestruct;
}
```
Remove the line that invokes `selfdestruct` and consider changing internal state so that future calls to `delegateTokens` always revert.
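A minimal sketch of that state change, using a hypothetical `disabled` flag (the `whenNotDisabled` modifier would then guard `delegateTokens`, whose signature is not shown in the excerpt):
```
bool private disabled; // hypothetical flag replacing selfdestruct

modifier whenNotDisabled() {
    require(!disabled, 'vault emptied');
    _;
}

function withdrawTokens(address to, uint256 amount) external onlyController {
    TransferHelper.withdrawTokens(token, to, amount);
    // Instead of self-destructing, permanently disable further delegation.
    if (IERC20(token).balanceOf(address(this)) == 0) disabled = true;
}
```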
Balance of msg.sender Is Used Instead of the from Address
low
The `TransferHelper` library has methods that allow transferring tokens directly or on behalf of a different wallet that previously approved the transfer. Those functions also check the sender's balance before conducting the transfer. In the second case, where the transfer happens on behalf of someone else, the code checks the `msg.sender` balance instead of the actual token spender's balance.
```
function transferTokens(
    address token,
    address from,
    address to,
    uint256 amount
) internal {
    uint256 priorBalance = IERC20(token).balanceOf(address(to));
    require(IERC20(token).balanceOf(msg.sender) >= amount, 'THL01');
```
Use the `from` parameter instead of `msg.sender`.
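A sketch of the corrected check; only the `require` line mirrors the quoted code, while the transfer tail (and the `THL02` error code) is a hypothetical reconstruction:
```
function transferTokens(
    address token,
    address from,
    address to,
    uint256 amount
) internal {
    uint256 priorBalance = IERC20(token).balanceOf(to);
    // Check the balance of the actual spender, not msg.sender.
    require(IERC20(token).balanceOf(from) >= amount, 'THL01');
    SafeERC20.safeTransferFrom(IERC20(token), from, to, amount);
    require(IERC20(token).balanceOf(to) - priorBalance == amount, 'THL02'); // hypothetical error code
}
```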
Bridge Token Would Be Locked and Cannot Bridge to Native Token
high
If bridged token B for native token A is already deployed and `confirmDeployment` is called on the other layer, `setDeployed` sets A's `nativeToBridgedToken` value to `DEPLOYED_STATUS`. Bridged token B can then no longer be bridged back to native token A in the `completeBridging` function, because A's `nativeToBridgedToken` value is no longer `NATIVE_STATUS`; as a result, the native token won't be transferred to the receiver, and the user's bridged tokens will be locked on the original layer.
```
if (nativeMappingValue == NATIVE_STATUS) {
    // Token is native on the local chain
    IERC20(_nativeToken).safeTransfer(_recipient, _amount);
} else {
    bridgedToken = nativeMappingValue;
    if (nativeMappingValue == EMPTY) {
        // New token
        bridgedToken = deployBridgedToken(_nativeToken, _tokenMetadata);
        bridgedToNativeToken[bridgedToken] = _nativeToken;
        nativeToBridgedToken[_nativeToken] = bridgedToken;
    }
    BridgedToken(bridgedToken).mint(_recipient, _amount);
}
```
```
function setDeployed(address[] memory _nativeTokens) external onlyMessagingService fromRemoteTokenBridge {
    address nativeToken;
    for (uint256 i; i < _nativeTokens.length; i++) {
        nativeToken = _nativeTokens[i];
        nativeToBridgedToken[_nativeTokens[i]] = DEPLOYED_STATUS;
        emit TokenDeployed(_nativeTokens[i]);
    }
}
```
Add a condition `nativeMappingValue == DEPLOYED_STATUS` for the native token transfer in `completeBridging`:
```
if (nativeMappingValue == NATIVE_STATUS || nativeMappingValue == DEPLOYED_STATUS) {
    IERC20(_nativeToken).safeTransfer(_recipient, _amount);
```
User Cannot Withdraw Funds if Bridging Failed or Delayed (Won't Fix)
high
If bridging fails because the single coordinator is down or censoring the message, or because the bridged token contract is set to a bad or wrong contract address by `setCustomContract`, the user's funds will be stuck in the `TokenBridge` contract until the coordinator comes back online or stops censoring; there is no way to withdraw the deposited funds.
```
function setCustomContract(
    address _nativeToken,
    address _targetContract
) external onlyOwner isNewToken(_nativeToken) {
    nativeToBridgedToken[_nativeToken] = _targetContract;
    bridgedToNativeToken[_targetContract] = _nativeToken;
    emit CustomContractSet(_nativeToken, _targetContract);
}
```
Add withdraw functionality to let users withdraw their funds under the above circumstances, or at least add withdraw functionality for the admin, who can then send the funds to the user manually. Ultimately, decentralize the coordinator and sequencer to reduce the risk of bridging failure.
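A minimal sketch of the admin-only escape hatch (the function name and event are hypothetical, and in practice the amount should be reconciled against pending, unclaimed bridge messages before release):
```
event TokensRescued(address indexed token, address indexed to, uint256 amount); // hypothetical event

function rescueTokens(address _token, address _to, uint256 _amount) external onlyOwner {
    // Manual recovery path for funds stranded by a failed or censored bridging.
    IERC20(_token).safeTransfer(_to, _amount);
    emit TokensRescued(_token, _to, _amount);
}
```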
Bridges Don't Support Multiple Native Tokens, Which May Lead to Incorrect Bridging
high
Currently, the system design does not support scenarios where native tokens with the same address on different layers (which is possible with the same deployer and nonce) are bridged.
For instance, consider a native token `A` on `L1` which has already been bridged to `L2`. If anyone tries to bridge a native token `B` on `L2` with the same address as token `A`, then instead of creating a new bridge on `L1` and minting new tokens, the token bridge will transfer native token `A` on `L1` to the `_recipient`, which is incorrect.
The reason is that the mappings don't differentiate between native tokens on the two different layers.
```
mapping(address => address) public nativeToBridgedToken;
mapping(address => address) public bridgedToNativeToken;
```
```
function completeBridging(
    address _nativeToken,
    uint256 _amount,
    address _recipient,
    bytes calldata _tokenMetadata
) external onlyMessagingService fromRemoteTokenBridge {
    address nativeMappingValue = nativeToBridgedToken[_nativeToken];
    address bridgedToken;

    if (nativeMappingValue == NATIVE_STATUS) {
        // Token is native on the local chain
        IERC20(_nativeToken).safeTransfer(_recipient, _amount);
    } else {
```
Redesign the approach to handle identically-addressed native tokens on different layers. One possible approach could be to define a separate set of mappings for each layer.
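One possible shape for that redesign, keying the native-token mapping by the chain id the token is native on (the parameter name is hypothetical and would have to be carried in the cross-chain message):
```
// Token A (native on L1) and token B (native on L2, same address)
// no longer share a mapping entry.
mapping(uint256 => mapping(address => address)) public nativeToBridgedToken;
mapping(address => address) public bridgedToNativeToken;

// The lookup in completeBridging would then become:
// address nativeMappingValue = nativeToBridgedToken[_sourceChainId][_nativeToken];
```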
No Check for Initializing Parameters of TokenBridge
high
In the `TokenBridge` contract's `initialize` function, there is no check on the initialization parameters `_securityCouncil`, `_messageService`, `_tokenBeacon`, and `_reservedTokens`. If any of these addresses is set to 0 or another invalid value, `TokenBridge` will not work and users may lose funds.
```
function initialize(
    address _securityCouncil,
    address _messageService,
    address _tokenBeacon,
    address[] calldata _reservedTokens
) external initializer {
    __Pausable_init();
    __Ownable_init();
    setMessageService(_messageService);
    tokenBeacon = _tokenBeacon;
    for (uint256 i = 0; i < _reservedTokens.length; i++) {
        setReserved(_reservedTokens[i]);
    }
    _transferOwnership(_securityCouncil);
}
```
Add non-zero address checks for `_securityCouncil`, `_messageService`, `_tokenBeacon`, and `_reservedTokens`.
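A sketch of the guarded initializer; it reuses the `ZeroAddressNotAllowed` error that appears elsewhere in the codebase (see `setVerifierAddress`), though whether `TokenBridge` declares it is an assumption:
```
function initialize(
    address _securityCouncil,
    address _messageService,
    address _tokenBeacon,
    address[] calldata _reservedTokens
) external initializer {
    if (_securityCouncil == address(0)) revert ZeroAddressNotAllowed();
    if (_messageService == address(0)) revert ZeroAddressNotAllowed();
    if (_tokenBeacon == address(0)) revert ZeroAddressNotAllowed();
    __Pausable_init();
    __Ownable_init();
    setMessageService(_messageService);
    tokenBeacon = _tokenBeacon;
    for (uint256 i = 0; i < _reservedTokens.length; i++) {
        if (_reservedTokens[i] == address(0)) revert ZeroAddressNotAllowed();
        setReserved(_reservedTokens[i]);
    }
    _transferOwnership(_securityCouncil);
}
```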
Owner Can Update Arbitrary Status for New Native Token Without Confirmation
high
The function `setCustomContract` allows the owner to set an arbitrary status for a new native token without confirmation, bypassing the bridge protocol:
It can set `DEPLOYED_STATUS` for a new native token, even if no bridged token exists for it.
It can set `NATIVE_STATUS` for a new native token even if the token is not native.
It can set `RESERVED_STATUS`, disallowing a new native token from being bridged.
```
function setCustomContract(
    address _nativeToken,
    address _targetContract
) external onlyOwner isNewToken(_nativeToken) {
    nativeToBridgedToken[_nativeToken] = _targetContract;
    bridgedToNativeToken[_targetContract] = _nativeToken;
    emit CustomContractSet(_nativeToken, _targetContract);
}
```
The function should not allow `_targetContract` to be any of the special status codes.
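A sketch of that guard, assuming the status codes are the sentinel addresses named in this report (`EMPTY`, `NATIVE_STATUS`, `DEPLOYED_STATUS`, `RESERVED_STATUS`) and using a hypothetical error:
```
if (
    _targetContract == EMPTY ||
    _targetContract == NATIVE_STATUS ||
    _targetContract == DEPLOYED_STATUS ||
    _targetContract == RESERVED_STATUS
) revert StatusAddressNotAllowed(_targetContract); // hypothetical error
```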
Owner May Exploit Bridged Tokens
high
The function `setCustomContract` allows the owner to define a custom ERC20 contract for a native token. However, it doesn't check whether the target contract has already been defined as a bridge for another native token. As a result, the owner may take advantage of this design flaw and bridge a new native token, which has not been bridged yet, to an already existing target (already a bridge for another native token). Now, if a user tries to bridge this native token, the token bridge on the source chain will take the user's tokens, and instead of deploying a new bridge on the destination chain, tokens will be minted to the `_recipient` on an existing bridge defined by the owner; the target can even be a random EOA address, creating a DoS.
The owner can also try to front-run calls to `completeBridging` for new native tokens on the destination chain by setting a different bridge via `setCustomContract`. However, the team states that the role will be controlled by a multi-sig, which makes front-running less likely to happen.
```
function setCustomContract(
    address _nativeToken,
    address _targetContract
) external onlyOwner isNewToken(_nativeToken) {
    nativeToBridgedToken[_nativeToken] = _targetContract;
    bridgedToNativeToken[_targetContract] = _nativeToken;
    emit CustomContractSet(_nativeToken, _targetContract);
}
```
```
} else {
    bridgedToken = nativeMappingValue;
    if (nativeMappingValue == EMPTY) {
        // New token
        bridgedToken = deployBridgedToken(_nativeToken, _tokenMetadata);
        bridgedToNativeToken[bridgedToken] = _nativeToken;
        nativeToBridgedToken[_nativeToken] = bridgedToken;
    }
    BridgedToken(bridgedToken).mint(_recipient, _amount);
}
```
Make sure a native token bridges to a single target contract. A possible approach could be to check whether the `bridgedToNativeToken` entry for the target is `EMPTY` or not: if it's not `EMPTY`, the target is already a bridge for a native token and the function should revert. The same can be achieved by adding the modifier `isNewToken(_targetContract)`.
Note: this doesn't resolve the front-running issue, even if its likelihood is low.
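A sketch of the recommended check (the error name is hypothetical):
```
function setCustomContract(
    address _nativeToken,
    address _targetContract
) external onlyOwner isNewToken(_nativeToken) {
    // Reject targets that already act as a bridge for another native token.
    if (bridgedToNativeToken[_targetContract] != EMPTY) {
        revert AlreadyBridgedToken(_targetContract); // hypothetical error
    }
    nativeToBridgedToken[_nativeToken] = _targetContract;
    bridgedToNativeToken[_targetContract] = _nativeToken;
    emit CustomContractSet(_nativeToken, _targetContract);
}
```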
Updating Message Service Does Not Emit Event
medium
Resolution
The recommendations are implemented by the Linea team in pull request 69 with the final commit hash `1fdd5cfc51c421ad9aaf8b2fd2b3e2ed86ffa898`.
The function `setMessageService` allows the owner to update the message service address. However, it does not emit any event reflecting the change. As a result, if the owner gets compromised, it can silently set a malicious message service, exploiting users' funds. Since no event is emitted, off-chain monitoring tools wouldn't be able to trigger alarms, and users would continue using the rogue message service unless the change is tracked manually.
```
function setMessageService(address _messageService) public onlyOwner {
    messageService = IMessageService(_messageService);
}
```
Consider emitting an event reflecting the update from the old message service to the new one.
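A minimal sketch of such an event (the event name and shape are hypothetical):
```
event MessageServiceUpdated( // hypothetical event
    address indexed oldMessageService,
    address indexed newMessageService,
    address indexed setBy
);

function setMessageService(address _messageService) public onlyOwner {
    address oldMessageService = address(messageService);
    messageService = IMessageService(_messageService);
    emit MessageServiceUpdated(oldMessageService, _messageService, msg.sender);
}
```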
Lock Solidity Version in pragma
low
Contracts should be deployed with the same compiler version they have been tested with. Locking the pragma helps ensure that contracts do not accidentally get deployed using, for example, the latest compiler, which may have higher risks of undiscovered bugs. Contracts may also be deployed by others, and the pragma indicates the compiler version intended by the original authors.
See Locking Pragmas in Ethereum Smart Contract Best Practices.
```
pragma solidity ^0.8.19;
```
```
pragma solidity ^0.8.19;
```
```
pragma solidity ^0.8.19;
```
```
pragma solidity ^0.8.19;
```
Lock the Solidity version to the latest version the contracts have been tested with before deploying them to production.
```
pragma solidity 0.8.19;
```
TokenBridge Does Not Follow a 2-Step Approach for Ownership Transfers
low
Resolution
The recommendations are implemented by the Linea team in pull request 71 with the final commit hash `8ebfd011675ea318b7067af52637192aa1126acd`.
`TokenBridge` defines a privileged Owner role; however, it uses a single-step approach, which immediately transfers ownership to the new address. If an incorrect address is passed accidentally, the current owner immediately loses control over the system, as there is no fail-safe mechanism.
A safer approach would be to first propose ownership to the new owner, and let the new owner accept the proposal to become the new owner. This adds a fail-safe mechanism for the current owner: if it proposes ownership to an incorrect address, it does not immediately lose control and may still propose again to a correct address.
```
contract TokenBridge is ITokenBridge, PausableUpgradeable, OwnableUpgradeable {
```
Consider moving to a 2-step approach for ownership transfers as recommended above. Note: OpenZeppelin provides a helper utility, `Ownable2StepUpgradeable`, which follows the recommended approach.
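A sketch of the swap, assuming OpenZeppelin Contracts 4.8 or later (where `Ownable2StepUpgradeable` is available):
```
import { Ownable2StepUpgradeable } from "@openzeppelin/contracts-upgradeable/access/Ownable2StepUpgradeable.sol";

contract TokenBridge is ITokenBridge, PausableUpgradeable, Ownable2StepUpgradeable {
    // transferOwnership(newOwner) now only proposes; the new owner must call
    // acceptOwnership() before the transfer takes effect.
}
```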
Heavy Blocks May Affect Block Finalization, if the Gas Requirement Exceeds Block Gas Limit
high
The `sequencer` takes care of finalizing blocks by submitting the proof, the blocks' data, the proof type, and the parent state root hash. The team mentions that blocks are finalized every 12s, and under normal conditions the system will work fine. However, for blocks containing lots of transactions and event logs, the function may require more gas than the block gas limit. As a consequence, it may affect block finalization or lead to a potential DoS.
```
function finalizeBlocks(
    BlockData[] calldata _blocksData,
    bytes calldata _proof,
    uint256 _proofType,
    bytes32 _parentStateRootHash
)
```
We advise the team to benchmark the cost associated with finalizing each block, determine how many blocks can be finalized in one rollup, and add limits accordingly for the prover/sequencer.
Postman Can Incorrectly Deliver a Message While Still Collecting the Fees
high
The message service allows cross-chain message delivery, where the user can define the parameters of the message as:
`from`: sender of the message
`_to`: receiver of the message
`_fee`: the fee the sender wants to pay the postman to deliver the message
`valueSent`: the value, in the native currency of the chain, to be sent with the message
`messageNumber`: a nonce value which increments for every message
`_calldata`: calldata for the message to be executed on the destination chain
The postman estimates the gas before claiming/delivering the message on the destination chain, thus avoiding scenarios where the fee sent is less than the cost of claiming the message.
However, nothing restricts the postman from sending gas merely equal to the fee paid by the user. Besides contributing to MEV, where the postman can select the messages with higher fees first and deliver them before others, this also opens up an opportunity for the postman to deliver a message incorrectly while still collecting the fee.
One such scenario is where the low-level call to the target `_to` makes another sub-call to a further address, say `x`. Assume the `_to` address doesn't check whether the call to address `x` was successful. Now, if the postman supplies gas that makes the top-level call succeed while the low-level call to `x` fails silently, the postman will still collect the fee for claiming the message, even though the message was not correctly delivered.
```
(bool success, bytes memory returnData) = _to.call{ value: _value }(_calldata);
if (!success) {
    if (returnData.length > 0) {
        assembly {
            let data_size := mload(returnData)
            revert(add(32, returnData), data_size)
        }
    } else {
        revert MessageSendingFailed(_to);
    }
}
```
```
(bool success, bytes memory returnData) = _to.call{ value: _value }(_calldata);
if (!success) {
    if (returnData.length > 0) {
        assembly {
            let data_size := mload(returnData)
            revert(add(32, returnData), data_size)
        }
    } else {
        revert MessageSendingFailed(_to);
    }
}
```
Another parameter can be added to the message construct, giving the user the option to define the amount of gas required to complete the transaction entirely. A check can then be added while claiming the message to make sure the gas supplied by the postman is at least the gas defined/demanded by the user. Cases where the user demands a huge amount of gas are simply avoided by gas estimation: if the demanded gas costs more than the supplied fee, the postman will opt not to deliver the message.
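A sketch of the suggested parameter, with `_gasLimit` and the error as hypothetical additions to the message construct:
```
// Sender side: the user-chosen minimum gas becomes part of the message hash.
bytes32 messageHash = keccak256(
    abi.encode(msg.sender, _to, _fee, valueSent, messageNumber, _gasLimit, _calldata)
);

// Claim side: refuse to execute with less gas than the sender demanded.
if (gasleft() < _gasLimit) revert InsufficientGasForClaim(); // hypothetical error
(bool success, bytes memory returnData) = _to.call{ value: _value, gas: _gasLimit }(_calldata);
```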
User's Funds Would Stuck if the Message Claim Failed on the Destination Layer
high
When claiming a message on the destination layer, if the message fails to execute for any of various reasons (e.g. wrong target contract address, wrong contract logic, out of gas, malicious contract), the Ether sent with `sendMessage` on the original layer will be stuck, although the message can be retried later by the postman or the user (and could fail again).
```
uint256 messageNumber = nextMessageNumber;
uint256 valueSent = msg.value - _fee;

bytes32 messageHash = keccak256(abi.encode(msg.sender, _to, _fee, valueSent, messageNumber, _calldata));
```
```
(bool success, bytes memory returnData) = _to.call{ value: _value }(_calldata);
if (!success) {
    if (returnData.length > 0) {
        assembly {
            let data_size := mload(returnData)
            revert(add(32, returnData), data_size)
        }
    } else {
        revert MessageSendingFailed(_to);
    }
}
```
```
(bool success, bytes memory returnData) = _to.call{ value: _value }(_calldata);
if (!success) {
    if (returnData.length > 0) {
        assembly {
            let data_size := mload(returnData)
            revert(add(32, returnData), data_size)
        }
    } else {
        revert MessageSendingFailed(_to);
    }
}
```
Add a refund mechanism to return users' funds if the message fails to be delivered on the destination layer.
Front Running finalizeBlocks When Sequencers Are Decentralized
high
When the sequencer is decentralized in the future, one sequencer could front-run another sequencer's `finalizeBlocks` transaction without doing the actual proving and sequencing, and steal the reward for sequencing if there is one. Once the front-runner's `finalizeBlocks` is executed, the original sequencer's transaction will fail, since `currentL2BlockNumber` will have incremented by one and the state root hash won't match; as a result, the original sequencer's sequencing and proving work is wasted.
```
function finalizeBlocks(
    BlockData[] calldata _blocksData,
    bytes calldata _proof,
    uint256 _proofType,
    bytes32 _parentStateRootHash
)
    external
    whenTypeNotPaused(PROVING_SYSTEM_PAUSE_TYPE)
    whenTypeNotPaused(GENERAL_PAUSE_TYPE)
    onlyRole(OPERATOR_ROLE)
{
    if (stateRootHashes[currentL2BlockNumber] != _parentStateRootHash) {
        revert StartingRootHashDoesNotMatch();
    }

    _finalizeBlocks(_blocksData, _proof, _proofType, _parentStateRootHash, true);
}
```
Add the sequencer's address as a parameter of the `_finalizeBlocks` function, and include it in the public input hash of the proof in the verification function `_verifyProof`.
```
function _finalizeBlocks(
    BlockData[] calldata _blocksData,
    bytes memory _proof,
    uint256 _proofType,
    bytes32 _parentStateRootHash,
    bool _shouldProve,
    address _sequencer
  )
```
```
_verifyProof(
    uint256(
      keccak256(
        abi.encode(
          keccak256(abi.encodePacked(blockHashes)),
          firstBlockNumber,
          keccak256(abi.encodePacked(timestampHashes)),
          keccak256(abi.encodePacked(hashOfRootHashes)),
          keccak256(abi.encodePacked(_sequencer))
        )
      )
    ) % MODULO_R,
    _proofType,
    _proof,
    _parentStateRootHash
  );
```
User Funds Would Stuck if the Single Coordinator Is Offline or Censoring Messages
high
When a user sends a message from L1 to L2, the coordinator needs to post the message to L2; this happens in the message anchoring function (`addL1L2MessageHashes`) on L2, after which the user or the postman can claim the message on L2. Since there is only a single coordinator, if the coordinator is down or censoring messages sent from L1 to L2, user funds can be stuck on L1 until the coordinator comes back online or stops censoring the message, as there is no message-cancel or message-expiry feature. Although the operator can pause message sending on L1 once the coordinator is down, a message that was sent but not posted to L2 before the pause will still be stuck.
```
uint256 messageNumber = nextMessageNumber;
uint256 valueSent = msg.value - _fee;

bytes32 messageHash = keccak256(abi.encode(msg.sender, _to, _fee, valueSent, messageNumber, _calldata));
```
```
function addL1L2MessageHashes(bytes32[] calldata _messageHashes) external onlyRole(L1_L2_MESSAGE_SETTER_ROLE) {
    uint256 messageHashesLength = _messageHashes.length;

    if (messageHashesLength > 100) {
        revert MessageHashesListLengthHigherThanOneHundred(messageHashesLength);
    }

    for (uint256 i; i < messageHashesLength; ) {
        bytes32 messageHash = _messageHashes[i];
        if (inboxL1L2MessageStatus[messageHash] == INBOX_STATUS_UNKNOWN) {
            inboxL1L2MessageStatus[messageHash] = INBOX_STATUS_RECEIVED;
        }
        unchecked {
            i++;
        }
    }

    emit L1L2MessageHashesAddedToInbox(_messageHashes);
}
```
Decentralize the coordinator and sequencer, or let users cancel or drop a message once its deadline has expired.
Changing Verifier Address Doesn't Emit Event
high
In the function `setVerifierAddress`, no event is emitted after the verifier address is changed. This means that if the operator (security council) changes the verifier to a buggy one, or if the security council is compromised and an attacker swaps in a malicious verifier, unsuspecting users would keep using the service and could lose funds, since fraudulent transactions would be verified.
```
function setVerifierAddress(address _newVerifierAddress, uint256 _proofType) external onlyRole(DEFAULT_ADMIN_ROLE) {
    if (_newVerifierAddress == address(0)) {
        revert ZeroAddressNotAllowed();
    }
    verifiers[_proofType] = _newVerifierAddress;
}
```
Emit an event after changing the verifier address that includes the old verifier address, the new verifier address, and the caller account.
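A minimal sketch of such an event (the event name and shape are hypothetical):
```
event VerifierAddressChanged( // hypothetical event
    address indexed oldVerifierAddress,
    address indexed newVerifierAddress,
    uint256 indexed proofType,
    address setBy
);

function setVerifierAddress(address _newVerifierAddress, uint256 _proofType) external onlyRole(DEFAULT_ADMIN_ROLE) {
    if (_newVerifierAddress == address(0)) {
        revert ZeroAddressNotAllowed();
    }
    emit VerifierAddressChanged(verifiers[_proofType], _newVerifierAddress, _proofType, msg.sender);
    verifiers[_proofType] = _newVerifierAddress;
}
```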
L2 Blocks With Incorrect Timestamp Could Be Finalized
medium
In `_finalizeBlocks` of `ZkEvmV2`, the current block timestamp `blockInfo.l2BlockTimestamp` should be greater or equal than the last L2 block timestamp and less or equal than the L1 block timestamp when `_finalizeBlocks` is executed. However the first check is missing, blocks with incorrect timestamp could be finalized, causing unintended system behavior\\n```\\nif (blockInfo.l2BlockTimestamp >= block.timestamp) {\\n revert BlockTimestampError();\\n}\\n```\\n
Add the missing timestamp check.
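A sketch of the missing lower bound, assuming a hypothetical `lastFinalizedL2Timestamp` variable tracking the previously finalized block's timestamp:
```
// New lower-bound check against the last finalized L2 block.
if (blockInfo.l2BlockTimestamp < lastFinalizedL2Timestamp) {
    revert BlockTimestampError();
}
// Existing upper-bound check against the L1 timestamp.
if (blockInfo.l2BlockTimestamp >= block.timestamp) {
    revert BlockTimestampError();
}
lastFinalizedL2Timestamp = blockInfo.l2BlockTimestamp;
```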
Rate Limiting Affecting the Usability and User's Funds Safety
medium
In the `claimMessage` and `sendMessage` functions of the `L1MessageService` contract, the function `_addUsedAmount` is used to rate-limit the Ether amount (1000 ETH) sent from L2 to L1 within a time period (24 hours). This is problematic: users usually send funds to L1 when they need to exit from L2, especially when security issues affecting the safety of their funds have happened on L2. With a limit in place, a whale sending a large amount of Ether to L1 can exhaust it quickly, leaving other users unable to withdraw their funds to L1 and putting those funds at risk. In addition, the limit can only be set and changed by the security council, and the security council can also pause the message service at any time, blocking users from withdrawing funds from L2; this makes the L2->L1 message service more centralized.
```
_addUsedAmount(_fee + _value);
```
```
_addUsedAmount(msg.value);
```
```
function _addUsedAmount(uint256 _usedAmount) internal {
    uint256 currentPeriodAmountTemp;

    if (currentPeriodEnd < block.timestamp) {
        // Update period before proceeding
        currentPeriodEnd = block.timestamp + periodInSeconds;
        currentPeriodAmountTemp = _usedAmount;
    } else {
        currentPeriodAmountTemp = currentPeriodAmountInWei + _usedAmount;
    }

    if (currentPeriodAmountTemp > limitInWei) {
        revert RateLimitExceeded();
    }

    currentPeriodAmountInWei = currentPeriodAmountTemp;
}
```
Remove rate limiting from the L2->L1 message service.
Front Running claimMessage on L1 and L2
medium
A front-runner on L1 or L2 can front-run a `claimMessage` transaction whenever the `fee` is greater than the gas cost of claiming the message and `_feeRecipient` is not set; the `fee` is then transferred to `msg.sender` (the front-runner) once the message is claimed. As a result, the postman loses the incentive to deliver (claim) messages on the destination layer.
```
if (_fee > 0) {
    address feeReceiver = _feeRecipient == address(0) ? msg.sender : _feeRecipient;
    (bool feePaymentSuccess, ) = feeReceiver.call{ value: _fee }("");
    if (!feePaymentSuccess) {
        revert FeePaymentFailed(feeReceiver);
    }
```
```
if (_fee > 0) {
    address feeReceiver = _feeRecipient == address(0) ? msg.sender : _feeRecipient;
    (bool feePaymentSuccess, ) = feeReceiver.call{ value: _fee }("");
    if (!feePaymentSuccess) {
        revert FeePaymentFailed(feeReceiver);
    }
}
```
There are a few protections against front-running, including the Flashbots service. Another option is to avoid relying on `msg.sender` and instead have the user claim the message on the destination layer using a `claimMessage` transaction signed by the postman.
Contracts Not Well Designed for Upgrades
medium
1. Inconsistent storage layout
The contracts introduce some buffer space in the storage layout to cope with scenarios where new storage variables need to be added when upgrading the contracts to a newer version. This helps reduce the chances of potential storage collisions. However, the storage layout concerning the buffer space is inconsistent, and multiple variations have been observed.
`PauseManager`, `RateLimiter`, and `MessageServiceBase` add a buffer space of 10, contrary to other contracts, which define the space as 50.
```
uint256[10] private _gap;
```
```
uint256[10] private _gap;
```
```
uint256[10] private __base_gap;
```
`L2MessageService` defines the buffer space before its existing storage variables.
```
uint256[50] private __gap_L2MessageService;
```
If there is a need to inherit from this contract in the future, the derived contract has to define the buffer space first, similar to `L2MessageService`. If it doesn't, `L2MessageService` can't have more storage variables: if it adds them, they will collide with the derived contract's storage slots.
2. `RateLimiter` and `MessageServiceBase` initialize values without the modifier `onlyInitializing`
```
function __RateLimiter_init(uint256 _periodInSeconds, uint256 _limitInWei) internal {
```
```
function _init_MessageServiceBase(address _messageService, address _remoteSender) internal {
```
The modifier `onlyInitializing` makes sure that the function can only be invoked by a function marked as `initializer`. However, it is absent here, which means these are normal internal functions that can be called from any other function, opening opportunities for errors.
Define a consistent storage layout. Pick a fixed total `d` for every contract and size each buffer as `d` minus the number of occupied storage slots. For instance, with `d = 50` and 20 occupied slots, the buffer space is 50 - 20 = 30 slots. This maintains a consistent storage footprint throughout the inheritance hierarchy.
Follow a consistent approach to defining buffer space. Currently, all the other contracts define the buffer space after their occupied storage slots, so the same should be done in `L2MessageService` as well.
Define the functions `__RateLimiter_init` and `_init_MessageServiceBase` as `onlyInitializing`.
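As an illustration of that arithmetic for a contract with 20 occupied slots and `d = 50`:
```
// 20 occupied slots + a 50 - 20 = 30 slot buffer keeps every contract in the
// hierarchy at a fixed 50-slot footprint, so future variables can safely
// replace buffer slots without shifting inherited layouts.
uint256[30] private __gap;
```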
Potential Code Corrections
low
Function `_updateL1L2MessageStatusToReceived` and `addL1L2MessageHashes` allows status update for already received/sent/claimed messages.\\n```\\nfunction \\_updateL1L2MessageStatusToReceived(bytes32[] memory \\_messageHashes) internal {\\n uint256 messageHashArrayLength = \\_messageHashes.length;\\n\\n for (uint256 i; i < messageHashArrayLength; ) {\\n bytes32 messageHash = \\_messageHashes[i];\\n uint256 existingStatus = outboxL1L2MessageStatus[messageHash];\\n\\n if (existingStatus == INBOX\\_STATUS\\_UNKNOWN) {\\n revert L1L2MessageNotSent(messageHash);\\n }\\n\\n if (existingStatus != OUTBOX\\_STATUS\\_RECEIVED) {\\n outboxL1L2MessageStatus[messageHash] = OUTBOX\\_STATUS\\_RECEIVED;\\n }\\n\\n unchecked {\\n i++;\\n }\\n }\\n\\n emit L1L2MessagesReceivedOnL2(\\_messageHashes);\\n}\\n```\\n\\n```\\nfunction addL1L2MessageHashes(bytes32[] calldata \\_messageHashes) external onlyRole(L1\\_L2\\_MESSAGE\\_SETTER\\_ROLE) {\\n uint256 messageHashesLength = \\_messageHashes.length;\\n\\n if (messageHashesLength > 100) {\\n revert MessageHashesListLengthHigherThanOneHundred(messageHashesLength);\\n }\\n\\n for (uint256 i; i < messageHashesLength; ) {\\n bytes32 messageHash = \\_messageHashes[i];\\n if (inboxL1L2MessageStatus[messageHash] == INBOX\\_STATUS\\_UNKNOWN) {\\n inboxL1L2MessageStatus[messageHash] = INBOX\\_STATUS\\_RECEIVED;\\n }\\n unchecked {\\n i++;\\n }\\n }\\n\\n emit L1L2MessageHashesAddedToInbox(\\_messageHashes);\\n```\\n\\nIt may trigger false alarms, as they will still be a part of `L1L2MessagesReceivedOnL2` and `L1L2MessageHashesAddedToInbox`.\\n`_updateL1L2MessageStatusToReceived` checks the status of L1->L2 messages as:\\n```\\nif (existingStatus == INBOX\\_STATUS\\_UNKNOWN) {\\n revert L1L2MessageNotSent(messageHash);\\n}\\n```\\n\\nHowever, the status is need to be checked with `OUTBOX_STATUS_UNKNOWN` instead of `INBOX_STATUS_UNKNOWN` as it is an outbox message. This creates a hindrance in the code readability and should be fixed.\\nArray `timestampHashes` stores `l2BlockTimestamp` as integers, contrary to the hashes that the variable name states.\\n```\\ntimestampHashes[i] = blockInfo.l2BlockTimestamp;\\n```\\n\\nUnused error declaration\\n```\\n \\* dev Thrown when the decoding action is invalid.\\n \\*/\\n\\nerror InvalidAction();\\n```\\n\\nTransactionDecoder defines an error as `InvalidAction` which is supposed to be thrown when the decoding action is invalid, as stated in NATSPEC comment. However, it is currently unutilized.
Only update the status for sent messages in `_updateL1L2MessageStatusToReceived` and for unknown messages in `addL1L2MessageHashes`, and revert otherwise, to avoid off-chain accounting errors.
Check the status of an L1->L2 sent message against `OUTBOX_STATUS_UNKNOWN` to improve code readability.
Either store timestamp hashes in the variable `timestampHashes` or rename the variable accordingly.
Remove the error declaration if it does not serve any purpose.
TransactionDecoder Does Not Account for the Missing Elements While Decoding a Transaction
low
The library tries to decode calldata from different transaction types by jumping to the position of the calldata element in the RLP encoding. These positions are:
EIP-1559: 8
EIP-2930: 7
Legacy: 6
```
data = it._skipTo(8)._toBytes();
```
```
data = it._skipTo(7)._toBytes();
```
```
data = it._skipTo(6)._toBytes();
```
However, the decoder doesn't check whether the required element actually exists in the provided encoding.
The decoder uses the library `RLPReader` to skip to the desired element. However, `RLPReader` doesn't revert when there are not enough elements to skip to; it simply returns byte `0x00`, while still completing the unnecessary iterations.
```
function _skipTo(Iterator memory _self, uint256 _skipToNum) internal pure returns (RLPItem memory item) {
    uint256 ptr = _self.nextPtr;
    uint256 itemLength = _itemLength(ptr);
    _self.nextPtr = ptr + itemLength;

    for (uint256 i; i < _skipToNum - 1; ) {
        ptr = _self.nextPtr;
        itemLength = _itemLength(ptr);
        _self.nextPtr = ptr + itemLength;

        unchecked {
            i++;
        }
    }

    item.len = itemLength;
    item.memPtr = ptr;
}
```
Although this doesn't pose a security issue here, as `ZkEvmV2` only tries to decode an array of bytes32 hashes from the RLP-encoded transaction, it may still lead to errors in other use cases if not handled correctly.
```
CodecV2._extractXDomainAddHashes(TransactionDecoder.decodeTransaction(_transactions[_batchReceptionIndices[i]]))
```
The RLP library should revert if there are not enough elements in the encoding to skip to.
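A sketch of such a bound check inside the skip loop; the `end`-of-encoding field and the error name are hypothetical, since the full `Iterator` layout is not shown in the excerpt:
```
for (uint256 i; i < _skipToNum - 1; ) {
    ptr = _self.nextPtr;
    // Abort instead of walking past the end of the encoded payload.
    if (ptr >= _self.end) revert NotEnoughRLPItems(); // hypothetical field and error
    itemLength = _itemLength(ptr);
    _self.nextPtr = ptr + itemLength;

    unchecked {
        i++;
    }
}
```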
Incomplete Message State Check When Claiming Messages on L1 and L2
low
When claiming a message on L1 or L2, `_updateL2L1MessageStatusToClaimed` and `_updateL1L2MessageStatusToClaimed` are called to update the message status. However, the message state check only tests for status `INBOX_STATUS_RECEIVED` and does not distinguish status `INBOX_STATUS_UNKNOWN`, which means the message was not picked up by the coordinator or was never sent on L1 or L2 and the call should revert for that reason. As a result, a claim can revert with an incorrect reason.
```
function _updateL2L1MessageStatusToClaimed(bytes32 _messageHash) internal {
    if (inboxL2L1MessageStatus[_messageHash] != INBOX_STATUS_RECEIVED) {
        revert MessageAlreadyClaimed();
    }

    delete inboxL2L1MessageStatus[_messageHash];

    emit L2L1MessageClaimed(_messageHash);
}
```
```
function _updateL1L2MessageStatusToClaimed(bytes32 _messageHash) internal {
    if (inboxL1L2MessageStatus[_messageHash] != INBOX_STATUS_RECEIVED) {
        revert MessageAlreadyClaimed();
    }

    inboxL1L2MessageStatus[_messageHash] = INBOX_STATUS_CLAIMED;

    emit L1L2MessageClaimed(_messageHash);
}
```
Add the missing status check and a relevant revert reason for status `INBOX_STATUS_UNKNOWN`.
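A sketch of the distinguished checks for the L1->L2 case (the `MessageNotReceived` error is hypothetical):
```
function _updateL1L2MessageStatusToClaimed(bytes32 _messageHash) internal {
    uint256 status = inboxL1L2MessageStatus[_messageHash];
    if (status == INBOX_STATUS_UNKNOWN) {
        revert MessageNotReceived(_messageHash); // hypothetical error: never sent or not yet anchored
    }
    if (status == INBOX_STATUS_CLAIMED) {
        revert MessageAlreadyClaimed();
    }

    inboxL1L2MessageStatus[_messageHash] = INBOX_STATUS_CLAIMED;

    emit L1L2MessageClaimed(_messageHash);
}
```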
Events Which May Trigger False Alarms
low
1. `PauseManager` allows `PAUSE_MANAGER_ROLE` to pause/unpause a type as:
```
function pauseByType(bytes32 _pauseType) external onlyRole(PAUSE_MANAGER_ROLE) {
    pauseTypeStatuses[_pauseType] = true;
    emit Paused(_msgSender(), _pauseType);
}
```
```
function unPauseByType(bytes32 _pauseType) external onlyRole(PAUSE_MANAGER_ROLE) {
    pauseTypeStatuses[_pauseType] = false;
    emit UnPaused(_msgSender(), _pauseType);
}
```
However, the functions don't check whether the given `_pauseType` has already been paused/unpaused, and they emit an event on every call. This may trigger false alarms for off-chain monitoring tools and cause unnecessary panic.
2. `RateLimiter` allows resetting the limit and the used amount as:
```
function resetRateLimitAmount(uint256 _amount) external onlyRole(RATE_LIMIT_SETTER_ROLE) {
    bool amountUsedLoweredToLimit;

    if (_amount < currentPeriodAmountInWei) {
        currentPeriodAmountInWei = _amount;
        amountUsedLoweredToLimit = true;
    }

    limitInWei = _amount;

    emit LimitAmountChange(_msgSender(), _amount, amountUsedLoweredToLimit);
}
```
```
function resetAmountUsedInPeriod() external onlyRole(RATE_LIMIT_SETTER_ROLE) {
    currentPeriodAmountInWei = 0;

    emit AmountUsedInPeriodReset(_msgSender());
}
```
However, it doesn't account for scenarios where the function is called after the current period has ended but before a new period has started. As `currentPeriodAmountInWei` still holds the used amount of the last period, if `RATE_LIMIT_SETTER_ROLE` tries to reset the limit to a value lower than the used amount, the function emits the same `LimitAmountChange` event with the `amountUsedLoweredToLimit` flag set.
In addition, the function then makes `currentPeriodAmountInWei` equal to `limitInWei`, which means no more used amount can be added until the used amount is manually reset to 0. This points to the fact that the used amount should be reset automatically once the current period ends. While this is handled automatically in the function `_addUsedAmount`, if the new period has not yet started it has to be done in a 2-step approach, i.e. first reset the used amount and then the limit. This can be simplified by checking for the current period in the `resetRateLimitAmount` function itself.
The same goes for the scenario where the used amount is reset after the current period ends: the function emits the same `AmountUsedInPeriodReset` event.
These can create unnecessary confusion, as the emitted events don't account for the scenarios mentioned above.
Consider adding checks to make sure already paused/unpaused types don't emit the respective events.
Consider emitting different events, or adding a flag to the events, that makes it easy to differentiate whether the limit and used amount were reset within the current period or after it ended.
Reset `currentPeriodAmountInWei` in the function `resetRateLimitAmount` itself if the current period has ended.
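A minimal sketch of the first suggestion (the `AlreadyPaused` error is hypothetical):
```
function pauseByType(bytes32 _pauseType) external onlyRole(PAUSE_MANAGER_ROLE) {
    // Skip the state write and, crucially, the event when nothing changes.
    if (pauseTypeStatuses[_pauseType]) revert AlreadyPaused(_pauseType); // hypothetical error
    pauseTypeStatuses[_pauseType] = true;
    emit Paused(_msgSender(), _pauseType);
}
```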
No Proper Trusted Setup Acknowledged
high
Linea uses the PLONK proof system, which needs a preprocessed CRS (Common Reference String) for proving and verification. The security of a PLONK system rests on a trusted setup ceremony to compute the CRS. The current verifier uses a CRS created by a single party, which requires fully trusting that party to delete the toxic waste (trapdoor); otherwise the trapdoor can be used to generate forged proofs, undermining the security of the entire system.
```
uint256 constant g2_srs_0_x_0 = 11559732032986387107991004021392285783925812861821192530917403151452391805634;
uint256 constant g2_srs_0_x_1 = 10857046999023057135944570762232829481370756359578518086990519993285655852781;
uint256 constant g2_srs_0_y_0 = 4082367875863433681332203403145435568316851327593401208105741076214120093531;
uint256 constant g2_srs_0_y_1 = 8495653923123431417604973247489272438418190587263600148770280649306958101930;

uint256 constant g2_srs_1_x_0 = 18469474764091300207969441002824674761417641526767908873143851616926597782709;
uint256 constant g2_srs_1_x_1 = 17691709543839494245591259280773972507311536864513996659348773884770927133474;
uint256 constant g2_srs_1_y_0 = 2799122126101651639961126614695310298819570600001757598712033559848160757380;
uint256 constant g2_srs_1_y_1 = 3054480525781015242495808388429905877188466478626784485318957932446534030175;
```
Conduct a proper MPC to generate the CRS, like the Powers of Tau MPC, or use a trustworthy CRS generated by an existing audited trusted setup like Aztec's Ignition.
Missing Verification of Pairing Check Result
high
In the function `batch_verify_multi_points`, the SNARK pairing check is done by calling the pairing precompile (`let l_success := staticcall(sub(gas(), 2000),8,mPtr,0x180,0x00,0x20)`), but only the execution status is stored in the final success state (state_success); the pairing check result, which is written to memory location 0x00, is neither stored nor checked. This means that if the pairing check result is 0 (pairing check failed), the proof would still pass verification; e.g. an invalid proof with an incorrect proof element `proof_openings_selector_commit_api_at_zeta` would pass the pairing check. As a result, the SNARK pairing verification is broken.
```
let l_success := staticcall(sub(gas(), 2000),8,mPtr,0x180,0x00,0x20)
// l_success := true
mstore(add(state, state_success), and(l_success,mload(add(state, state_success))))
```
Another example: if either of the following is sent as the point at infinity, i.e. (0,0) as the (x,y) coordinates:
the commitment to the opening proof polynomial Wz
the commitment to the opening proof polynomial Wzw
the proof will still verify, since the pairing result is not checked.
Verify the pairing check result and fold it into the final success state after calling the pairing precompile.
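A sketch of the fix in the same Yul style; the pairing precompile writes a single word (1 on success, 0 on failure) to the return offset, here 0x00:
```
let l_success := staticcall(sub(gas(), 2000), 8, mPtr, 0x180, 0x00, 0x20)
// Combine the pairing result word with the call's execution status,
// instead of relying on the execution status alone.
l_success := and(l_success, mload(0x00))
mstore(add(state, state_success), and(l_success, mload(add(state, state_success))))
```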
Gas Griefing and Missing Return Status Check for staticcall(s) May Lead to Unexpected Outcomes (Partially Addressed)
high
The gas supplied to the staticcall(s) is calculated by subtracting `2000` from the gas remaining at that point. However, if not provided enough gas, a staticcall may fail with no return data, and execution will continue with the stale data previously held at the memory location specified as the staticcall's return offset.
1. Predictable derivation of challenges
The function `derive_gamma_beta_alpha_zeta` is used to derive the challenge values `gamma`, `beta`, `alpha`, and `zeta`. These values are derived from the prover's transcript by hashing defined parameters and are supposed to be unpredictable by either the prover or the verifier. The hash is computed with the SHA2-256 precompile. The values are considered unpredictable under the assumption that SHA2-256 acts as a random oracle and that it would be computationally infeasible for an attacker to find the pre-image of `gamma`. However, that assumption breaks if the precompile call silently fails.
```
pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), size, mPtr, 0x20)) //0x1b -> 000.."gamma"
```
```
pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1c), 0x24, mPtr, 0x20)) //0x1b -> 000.."gamma"
```
```
pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), 0x65, mPtr, 0x20)) //0x1b -> 000.."gamma"
```
```
pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1c), 0xe4, mPtr, 0x20))
```
```
pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr,start_input), size_input, add(state, state_gamma_kzg), 0x20))
```
If the staticcall(s) fail, the challenge values become predictable, which may help the prover forge proofs and launch other adversarial attacks.
2. Incorrect exponentiation
The functions `compute_ith_lagrange_at_z`, `compute_pi`, and `verify` compute modular exponentiation by making a `staticcall` to the `modexp` precompile as:
```
pop(staticcall(sub(gas(), 2000),0x05,mPtr,0xc0,0x00,0x20))
```
```
pop(staticcall(sub(gas(), 2000),0x05,mPtr,0xc0,mPtr,0x20))
```
```
pop(staticcall(sub(gas(), 2000),0x05,mPtr,0xc0,mPtr,0x20))
```
However, if not supplied enough gas, the staticcall(s) will fail, returning no result, and execution will continue with the stale data.
3. Incorrect point addition and scalar multiplication
```
pop(staticcall(sub(gas(), 2000),7,folded_evals_commit,0x60,folded_evals_commit,0x40))
```
```
let l_success := staticcall(sub(gas(), 2000),6,mPtr,0x80,dst,0x40)
```
```
let l_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,dst,0x40)
```
```
let l_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,mPtr,0x40)
```
```
l_success := and(l_success, staticcall(sub(gas(), 2000),6,mPtr,0x80,dst, 0x40))
```
For the same reason, `point_add`, `point_mul`, and `point_acc_mul` will return incorrect results. In fact, `point_acc_mul` will not revert even if the scalar multiplication fails in the first step, because the memory location specified as the return offset will still contain the old (x,y) coordinates of `src`, which are points on the curve; it will then proceed by incorrectly adding the (x,y) coordinates of `dst` to them.
However, a gas-griefing attack will not be practically possible for staticcall(s) near the start of the top-level transaction, as it would require the attacker to pass an amount of gas low enough to make the `staticcall` fail yet still sufficient for the top-level transaction not to run out of gas. It can, however, still be conducted for staticcall(s) executed near the end of the top-level transaction.
Check the return status of each staticcall and revert if any return status is 0.\\nAlso fix the comments on the staticcall(s); for instance, the function `derive_beta` says `0x1b -> 000.."gamma"` while the memory pointer holds the ASCII value of the string `beta`.
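A minimal sketch of the recommended pattern, shown on one of the hash staticcalls (the same check applies to the `modexp`, ECADD, and ECMUL calls):\\n```\\nlet l\\_success := staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), size, mPtr, 0x20)\\nif iszero(l\\_success) {\\n    // do not continue with stale memory if the precompile call failed\\n    revert(0, 0)\\n}\\n```\\n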
null
```\\npop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), size, mPtr, 0x20)) //0x1b -> 000.."gamma"\\n```\\n
Missing Scalar Field Range Check in Scalar Multiplication
high
There is no field element range check on scalar field proof elements, e.g., `proof_l_at_zeta, proof_r_at_zeta, proof_o_at_zeta, proof_s1_at_zeta,proof_s2_at_zeta, proof_grand_product_at_zeta_omega`, as mentioned in step 2 of the verifier's algorithm in the Plonk paper. The scalar multiplication functions `point_mul` and `point_acc_mul` call the precompile ECMUL which, according to EIP-196, verifies that the point P is on the curve and that P.x and P.y are less than the base field modulus; however, it does not check that the scalar `s` is less than the scalar field modulus. If `s` is greater than the scalar field modulus `r_mod`, it would cause unintended behavior of the contract: specifically, if a scalar field proof element `e` is replaced by `e + r_mod`, the proof would still pass verification. That said, in Plonk's case only a few attack vectors could be based on this kind of proof malleability.\\n```\\nfunction point\\_mul(dst,src,s, mPtr) {\\n // let mPtr := add(mload(0x40), state\\_last\\_mem)\\n let state := mload(0x40)\\n mstore(mPtr,mload(src))\\n mstore(add(mPtr,0x20),mload(add(src,0x20)))\\n mstore(add(mPtr,0x40),s)\\n let l\\_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,dst,0x40)\\n mstore(add(state, state\\_success), and(l\\_success,mload(add(state, state\\_success))))\\n}\\n\\n// dst <- dst + [s]src (Elliptic curve)\\nfunction point\\_acc\\_mul(dst,src,s, mPtr) {\\n let state := mload(0x40)\\n mstore(mPtr,mload(src))\\n mstore(add(mPtr,0x20),mload(add(src,0x20)))\\n mstore(add(mPtr,0x40),s)\\n let l\\_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,mPtr,0x40)\\n mstore(add(mPtr,0x40),mload(dst))\\n mstore(add(mPtr,0x60),mload(add(dst,0x20)))\\n l\\_success := and(l\\_success, staticcall(sub(gas(), 2000),6,mPtr,0x80,dst, 0x40))\\n mstore(add(state, state\\_success), and(l\\_success,mload(add(state, state\\_success))))\\n}\\n```\\n
Add scalar field range check on scalar multiplication functions `point_mul` and `point_acc_mul` or the scalar field proof elements.
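A minimal sketch of the guard applied to `point_mul` (the same check applies to `point_acc_mul`), assuming `r_mod` is the scalar field modulus constant already defined in the verifier:\\n```\\nfunction point\\_mul(dst,src,s, mPtr) {\\n // reject non-canonical scalars to rule out the e + r\\_mod malleability\\n if iszero(lt(s, r\\_mod)) {\\n revert(0, 0)\\n }\\n let state := mload(0x40)\\n mstore(mPtr,mload(src))\\n mstore(add(mPtr,0x20),mload(add(src,0x20)))\\n mstore(add(mPtr,0x40),s)\\n let l\\_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,dst,0x40)\\n mstore(add(state, state\\_success), and(l\\_success,mload(add(state, state\\_success))))\\n}\\n```\\n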
null
```\\nfunction point\\_mul(dst,src,s, mPtr) {\\n // let mPtr := add(mload(0x40), state\\_last\\_mem)\\n let state := mload(0x40)\\n mstore(mPtr,mload(src))\\n mstore(add(mPtr,0x20),mload(add(src,0x20)))\\n mstore(add(mPtr,0x40),s)\\n let l\\_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,dst,0x40)\\n mstore(add(state, state\\_success), and(l\\_success,mload(add(state, state\\_success))))\\n}\\n\\n// dst <- dst + [s]src (Elliptic curve)\\nfunction point\\_acc\\_mul(dst,src,s, mPtr) {\\n let state := mload(0x40)\\n mstore(mPtr,mload(src))\\n mstore(add(mPtr,0x20),mload(add(src,0x20)))\\n mstore(add(mPtr,0x40),s)\\n let l\\_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,mPtr,0x40)\\n mstore(add(mPtr,0x40),mload(dst))\\n mstore(add(mPtr,0x60),mload(add(dst,0x20)))\\n l\\_success := and(l\\_success, staticcall(sub(gas(), 2000),6,mPtr,0x80,dst, 0x40))\\n mstore(add(state, state\\_success), and(l\\_success,mload(add(state, state\\_success))))\\n}\\n```\\n
Missing Public Inputs Range Check
high
The public input is an array of `uint256` numbers; there is no check that each public input is less than the SNARK scalar field modulus `r_mod`, as mentioned in step 3 of the verifier's algorithm in the Plonk paper. Since the public inputs are involved in the computation of `Pi` in the Plonk gate, which is in the SNARK scalar field, the missing check might cause a scalar field overflow, and the verification contract would fail and revert. To prevent overflow and other unintended behavior, there should be a range check for the public inputs.\\n```\\nfunction Verify(bytes memory proof, uint256[] memory public\\_inputs)\\n```\\n\\n```\\nsum\\_pi\\_wo\\_api\\_commit(add(public\\_inputs,0x20), mload(public\\_inputs), zeta)\\npi := mload(mload(0x40))\\n\\nfunction sum\\_pi\\_wo\\_api\\_commit(ins, n, z) {\\n let li := mload(0x40)\\n batch\\_compute\\_lagranges\\_at\\_z(z, n, li)\\n let res := 0\\n let tmp := 0\\n for {let i:=0} lt(i,n) {i:=add(i,1)}\\n {\\n tmp := mulmod(mload(li), mload(ins), r\\_mod)\\n res := addmod(res, tmp, r\\_mod)\\n li := add(li, 0x20)\\n ins := add(ins, 0x20)\\n }\\n mstore(mload(0x40), res)\\n}\\n```\\n
Add a range check for the public inputs: `require(input[i] < r_mod, "public inputs greater than snark scalar field");`
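A minimal sketch of the loop, placed at the top of `Verify` before the inputs are consumed (assuming `r_mod` is accessible as a Solidity-level constant):\\n```\\nfor (uint256 i = 0; i < public\\_inputs.length; i++) {\\n    // every public input must be a canonical scalar field element\\n    require(public\\_inputs[i] < r\\_mod, "public inputs greater than snark scalar field");\\n}\\n```\\n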
null
```\\nfunction Verify(bytes memory proof, uint256[] memory public\\_inputs)\\n```\\n
Loading Arbitrary Data as Wire Commitments Acknowledged
medium
Function `load_wire_commitments_commit_api`, as the name suggests, loads wire commitments from the proof into the memory array `wire_commitments`. The array is made to hold 2 values per commitment, i.e., its size is 2 * `vk_nb_commitments_commit_api`, which makes sense as these 2 values are the x & y coordinates of the commitments.\\n```\\nuint256[] memory wire\\_committed\\_commitments = new uint256[](2\\*vk\\_nb\\_commitments\\_commit\\_api);\\nload\\_wire\\_commitments\\_commit\\_api(wire\\_committed\\_commitments, proof);\\n```\\n\\nComing back to the function `load_wire_commitments_commit_api`, it extracts both the x & y coordinates of a commitment in a single iteration. However, the loop runs `2 * vk_nb_commitments_commit_api` times, or in other words, twice as many iterations as required. For instance, if there is 1 commitment, the loop will run two times: the first iteration will pick up the actual coordinates, and the second one can pick up arbitrary data from the proof (if passed) and load it into memory. Although the data loaded in the extra iterations seems harmless, it still adds processing overhead.\\n```\\nfor {let i:=0} lt(i, mul(vk\\_nb\\_commitments\\_commit\\_api,2)) {i:=add(i,1)}\\n```\\n
The number of iterations should be equal to the size of commitments, i.e., `vk_nb_commitments_commit_api`. So consider switching from:\\n```\\nfor {let i:=0} lt(i, mul(vk_nb_commitments_commit_api,2)) {i:=add(i,1)}\\n```\\n\\nto:\\n```\\nfor {let i:=0} lt(i, vk_nb_commitments_commit_api) {i:=add(i,1)}\\n```\\n
null
```\\nuint256[] memory wire\\_committed\\_commitments = new uint256[](2\\*vk\\_nb\\_commitments\\_commit\\_api);\\nload\\_wire\\_commitments\\_commit\\_api(wire\\_committed\\_commitments, proof);\\n```\\n
Makefile: Target Order
low
The target `all` in the Makefile ostensibly wants to run the targets `clean` and `solc` in that order.\\n```\\nall: clean solc\\n```\\n\\nHowever, prerequisites in GNU Make are not ordered, and they might even run in parallel. In this case, this could cause spurious behavior like overwrite errors or files being deleted just after being created.
The Make way to ensure that targets run one after the other is\\n```\\nall: clean\\n $(MAKE) solc\\n```\\n\\nAlso `all` should be listed in the PHONY targets.
null
```\\nall: clean solc\\n```\\n
addPremium - A back runner may cause an insurance holder to lose their refunds by calling addPremium right after the original call
high
`addPremium` is a public function that can be called by anyone and that distributes the weekly premium payments to the pool manager and the rest of the pool shareholders. If the collateral deposited is not enough to cover the total coverage offered to insurance holders for a given week, refunds are allocated pro rata for all insurance holders of that particular week and policy. However, in the current implementation, attackers can call `addPremium` right after the original call to `addPremium` but before the call to `refund`; this will cause the insurance holders to lose their refunds, which will be effectively locked forever in the contract (unless the contract is upgraded).\\n```\\nrefundMap[policyIndex\\_][week] = incomeMap[policyIndex\\_][week].mul(\\n allCovered.sub(maximumToCover)).div(allCovered);\\n```\\n
`addPremium` should contain a validation check at the beginning of the function that reverts for the case of `incomeMap[policyIndex_][week] = 0`.
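A minimal sketch of the guard (the error string is an assumption; `week` is whatever week index the existing function computes):\\n```\\n// at the top of addPremium, before any distribution happens:\\nrequire(incomeMap[policyIndex\\_][week] > 0, "No premium to distribute");\\n```\\n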
null
```\\nrefundMap[policyIndex\\_][week] = incomeMap[policyIndex\\_][week].mul(\\n allCovered.sub(maximumToCover)).div(allCovered);\\n```\\n
refund - attacker can lock insurance holder's refunds by calling refund before a refund was allocated
high
`addPremium` is used to determine the `refund` amount that an insurance holder is eligible to claim. The amount is stored in the `refundMap` mapping and can then later be claimed by anyone on behalf of an insurance holder by calling `refund`. The `refund` function can't be called more than once for a given combination of `policyIndex_`, `week_`, and `who_`, as it would revert with an “Already refunded” error. This gives an attacker the opportunity to call `refund` on behalf of any insurance holder with value 0 inside the `refundMap`, causing any future `refund` allocated for that holder in a given week and for a given policy to be locked forever in the contract (unless the contract is upgraded).\\n```\\nfunction refund(\\n uint256 policyIndex\\_,\\n uint256 week\\_,\\n address who\\_\\n) external noReenter {\\n Coverage storage coverage = coverageMap[policyIndex\\_][week\\_][who\\_];\\n\\n require(!coverage.refunded, "Already refunded");\\n\\n uint256 allCovered = coveredMap[policyIndex\\_][week\\_];\\n uint256 amountToRefund = refundMap[policyIndex\\_][week\\_].mul(\\n coverage.amount).div(allCovered);\\n coverage.amount = coverage.amount.mul(\\n coverage.premium.sub(amountToRefund)).div(coverage.premium);\\n coverage.refunded = true;\\n\\n IERC20(baseToken).safeTransfer(who\\_, amountToRefund);\\n\\n if (eventAggregator != address(0)) {\\n IEventAggregator(eventAggregator).refund(\\n policyIndex\\_,\\n week\\_,\\n who\\_,\\n amountToRefund\\n );\\n }\\n}\\n```\\n
There should be a validation check at the beginning of the function that reverts if `refundMap[policyIndex_][week_] == 0`.
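A minimal sketch of the guard (the error string is an assumption):\\n```\\n// at the top of refund, before coverage.refunded is set:\\nrequire(refundMap[policyIndex\\_][week\\_] > 0, "Nothing to refund");\\n```\\n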
null
```\\nfunction refund(\\n uint256 policyIndex\\_,\\n uint256 week\\_,\\n address who\\_\\n) external noReenter {\\n Coverage storage coverage = coverageMap[policyIndex\\_][week\\_][who\\_];\\n\\n require(!coverage.refunded, "Already refunded");\\n\\n uint256 allCovered = coveredMap[policyIndex\\_][week\\_];\\n uint256 amountToRefund = refundMap[policyIndex\\_][week\\_].mul(\\n coverage.amount).div(allCovered);\\n coverage.amount = coverage.amount.mul(\\n coverage.premium.sub(amountToRefund)).div(coverage.premium);\\n coverage.refunded = true;\\n\\n IERC20(baseToken).safeTransfer(who\\_, amountToRefund);\\n\\n if (eventAggregator != address(0)) {\\n IEventAggregator(eventAggregator).refund(\\n policyIndex\\_,\\n week\\_,\\n who\\_,\\n amountToRefund\\n );\\n }\\n}\\n```\\n
addTidal, _updateUserTidal, withdrawTidal - wrong arithmetic calculations
high
To further incentivize sellers, anyone - although it will usually be the pool manager - can send an arbitrary amount of the Tidal token to a pool, which is then supposed to be distributed proportionally among the share owners. There are several flaws in the calculations that implement this mechanism:\\nA. addTidal:\\n```\\npoolInfo.accTidalPerShare = poolInfo.accTidalPerShare.add(\\n amount\\_.mul(SHARE\\_UNITS)).div(poolInfo.totalShare);\\n```\\n\\nThis should be:\\n```\\npoolInfo.accTidalPerShare = poolInfo.accTidalPerShare.add(\\n amount\\_.mul(SHARE\\_UNITS).div(poolInfo.totalShare));\\n```\\n\\nNote the different parenthesization. Without SafeMath:\\n```\\npoolInfo.accTidalPerShare += amount\\_ \\* SHARE\\_UNITS / poolInfo.totalShare;\\n```\\n\\nB. _updateUserTidal:\\n```\\nuint256 accAmount = poolInfo.accTidalPerShare.add(\\n userInfo.share).div(SHARE\\_UNITS);\\n```\\n\\nThis should be:\\n```\\nuint256 accAmount = poolInfo.accTidalPerShare.mul(\\n userInfo.share).div(SHARE\\_UNITS);\\n```\\n\\nNote that `add` has been replaced with `mul`. Without SafeMath:\\n```\\nuint256 accAmount = poolInfo.accTidalPerShare \\* userInfo.share / SHARE\\_UNITS;\\n```\\n\\nC. withdrawTidal:\\n```\\nuint256 accAmount = poolInfo.accTidalPerShare.add(userInfo.share);\\n```\\n\\nAs in B, this should be:\\n```\\nuint256 accAmount = poolInfo.accTidalPerShare.mul(\\n userInfo.share).div(SHARE\\_UNITS);\\n```\\n\\nNote that `add` has been replaced with `mul` and that a division by `SHARE_UNITS` has been appended. Without SafeMath:\\n```\\nuint256 accAmount = poolInfo.accTidalPerShare \\* userInfo.share / SHARE\\_UNITS;\\n```\\n\\nAs an additional minor point, the division in `addTidal` will revert with a panic (0x12) if the number of shares in the pool is zero. This case could be handled more gracefully.
Implement the fixes described above. The versions without `SafeMath` are easier to read and should be preferred; see https://github.com/ConsensysDiligence/tidal-audit-2023-04/issues/20.
null
```\\npoolInfo.accTidalPerShare = poolInfo.accTidalPerShare.add(\\n amount\\_.mul(SHARE\\_UNITS)).div(poolInfo.totalShare);\\n```\\n
claim - Incomplete and lenient implementation
high
In the current version of the code, the `claim` function is lacking crucial input validation logic as well as required state changes. Most of the process is implemented in other contracts or off-chain at the moment and is therefore out of scope for this audit, but there might still be issues caused by potential errors in the process. Moreover, the pool manager and the committee together have unlimited ownership of the deposits and can essentially withdraw all collateral to any desired address.\\n```\\nfunction claim(\\n uint256 policyIndex\\_,\\n uint256 amount\\_,\\n address receipient\\_\\n) external onlyPoolManager {\\n```\\n
To ensure a more secure claiming process, we propose adding the following logic to the `claim` function:\\n`refund` should be called at the beginning of the `claim` flow, so that the recipient's true coverage amount will be used.\\n`policyIndex` should be added as a parameter to this function, so that `coverageMap` can be used to validate that the amount claimed on behalf of a recipient is covered.\\nThe payout amount should be subtracted in the `coveredMap` and `coverageMap` mappings.
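A hedged sketch of how the entry point could incorporate these checks; the `week_` parameter, the `_refund` internal variant, and the error string are assumptions, and the existing committee-approval and payout flow is elided:\\n```\\nfunction claim(\\n    uint256 policyIndex\\_,\\n    uint256 week\\_,\\n    uint256 amount\\_,\\n    address recipient\\_\\n) external onlyPoolManager {\\n    // settle any pending refund first so the true coverage amount is used\\n    \\_refund(policyIndex\\_, week\\_, recipient\\_);\\n    Coverage storage coverage = coverageMap[policyIndex\\_][week\\_][recipient\\_];\\n    require(amount\\_ <= coverage.amount, "Claim exceeds coverage");\\n    // subtract the payout from the tracked coverage\\n    coverage.amount -= amount\\_;\\n    coveredMap[policyIndex\\_][week\\_] -= amount\\_;\\n    // ...existing committee-approval and payout logic continues here\\n}\\n```\\n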
null
```\\nfunction claim(\\n uint256 policyIndex\\_,\\n uint256 amount\\_,\\n address receipient\\_\\n) external onlyPoolManager {\\n```\\n
buy - insurance buyers trying to increase their coverage amount will lose their previous coverage
high
When a user wants to `buy` insurance, they are required to specify the desired amount (denoted as `amount_`) and to pay the entire premium upfront. In return, they receive ownership of an entry inside the `coverageMap` mapping. If a user calls the `buy` function more than once for the same policy and time frame, their entry in the `coverageMap` will not represent the accumulated amount they paid for but only the last coverage amount, which means previous coverage will be lost forever (unless the contract is upgraded).\\n```\\nfor (uint256 w = fromWeek\\_; w < toWeek\\_; ++w) {\\n incomeMap[policyIndex\\_][w] =\\n incomeMap[policyIndex\\_][w].add(premium);\\n coveredMap[policyIndex\\_][w] =\\n coveredMap[policyIndex\\_][w].add(amount\\_);\\n\\n require(coveredMap[policyIndex\\_][w] <= maximumToCover,\\n "Not enough to buy");\\n\\n coverageMap[policyIndex\\_][w][\\_msgSender()] = Coverage({\\n amount: amount\\_,\\n premium: premium,\\n refunded: false\\n });\\n}\\n```\\n
The coverage entry that represents the user's coverage should not be overwritten but should hold the accumulated amount of coverage instead.
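A minimal sketch of the accumulating write inside the loop (whether `refunded` needs special handling for a partially refunded week is a design decision left open here):\\n```\\nCoverage storage coverage = coverageMap[policyIndex\\_][w][\\_msgSender()];\\ncoverage.amount = coverage.amount.add(amount\\_);\\ncoverage.premium = coverage.premium.add(premium);\\n```\\n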
null
```\\nfor (uint256 w = fromWeek\\_; w < toWeek\\_; ++w) {\\n incomeMap[policyIndex\\_][w] =\\n incomeMap[policyIndex\\_][w].add(premium);\\n coveredMap[policyIndex\\_][w] =\\n coveredMap[policyIndex\\_][w].add(amount\\_);\\n\\n require(coveredMap[policyIndex\\_][w] <= maximumToCover,\\n "Not enough to buy");\\n\\n coverageMap[policyIndex\\_][w][\\_msgSender()] = Coverage({\\n amount: amount\\_,\\n premium: premium,\\n refunded: false\\n });\\n}\\n```\\n
Several issues related to upgradeability of contracts
medium
We did not find a proxy contract or factory in the repository, but the README contains the following information:\\ncode/README.md:L11\\n```\\nEvery Pool is a standalone smart contract. It is made upgradeable with OpenZeppelin's Proxy Upgrade Pattern.\\n```\\n\\ncode/README.md:L56\\n```\\nAnd there will be multiple proxies and one implementation of the Pools, and one proxy and one implementation of EventAggregator.\\n```\\n\\nThere are several issues related to upgradeability or, generally, using the contracts as implementations for proxies. All recommendations in this report assume that it is not necessary to remain compatible with an existing deployment.\\nB. If upgradeability is supposed to work with inheritance, there should be dummy variables at the end of each contract in the inheritance hierarchy. Some of these have to be removed when “real” state variables are added. More precisely, it is conventional to use a fixed-size `uint256` array `__gap`, such that the consecutively occupied slots at the beginning (for the “real” state variables) add up to 50 with the size of the array. If state variables are added later, the gap's size has to be reduced accordingly to maintain this invariant. Currently, the contracts do not declare such a `__gap` variable.\\nC. Implementation contracts should not remain uninitialized. To prevent initialization by an attacker - which, in some cases, can have an impact on the proxy - the implementation contract's constructor should call `_disableInitializers`.
Refamiliarize yourself with the subtleties and pitfalls of upgradeable `contracts`, in particular regarding state variables and the storage gap. A lot of useful information can be found here.\\nOnly import from `contracts-upgradeable`, not from `contracts`.\\nAdd appropriately-sized storage gaps at least to `PoolModel`, `NonReentrancy`, and `EventAggregator`. (Note that adding a storage gap to `NonReentrancy` will break compatibility with existing deployments.) Ideally, add comments and warnings to each file that state variables may only be added at the end, that the storage gap's size has to be reduced accordingly, and that state variables must not be removed, rearranged, or in any way altered (e.g., type, `constant`, immutable). No state variables should ever be added to the `Pool` contract, and a comment should make that clear.\\nAdd a constructor to `Pool` and `EventAggregator` that calls `_disableInitializers`.
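A minimal sketch of the two mechanical pieces (the gap size must be chosen so that the occupied slots plus the gap add up to 50; the inheritance shown is an assumption):\\n```\\ncontract PoolModel {\\n    // ...existing state variables...\\n\\n    // reserve slots for future state variables; shrink this array by one\\n    // slot for every variable appended above in a later version\\n    uint256[50] private \\_\\_gap;\\n}\\n\\ncontract Pool is PoolModel {\\n    /// @custom:oz-upgrades-unsafe-allow constructor\\n    constructor() {\\n        \\_disableInitializers();\\n    }\\n}\\n```\\n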
null
```\\nEvery Pool is a standalone smart contract. It is made upgradeable with OpenZeppelin's Proxy Upgrade Pattern.\\n```\\n
initialize - Committee members array can contain duplicates
medium
The initial committee members are given as an array argument to the pool's `initialize` function. When the array is processed, there is no check for duplicates, and duplicates may also end up in the storage array `committeeArray`.\\n```\\nfor (uint256 i = 0; i < committeeMembers\\_.length; ++i) {\\n address member = committeeMembers\\_[i];\\n committeeArray.push(member);\\n committeeIndexPlusOne[member] = committeeArray.length;\\n}\\n```\\n\\nDuplicates will result in a discrepancy between the length of the array - which is later interpreted as the number of committee members - and the actual number of (different) committee members. This could lead to more problems, such as an insufficient committee size to reach the threshold.
The `initialize` function should verify in the loop that `member` hasn't been added before. Note that `_executeAddToCommittee` refuses to add someone who is already in the committee, and the same technique can be employed here.
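A minimal sketch of the in-loop duplicate check, reusing the `committeeIndexPlusOne` bookkeeping (the error string is an assumption):\\n```\\nfor (uint256 i = 0; i < committeeMembers\\_.length; ++i) {\\n    address member = committeeMembers\\_[i];\\n    // a non-zero index-plus-one means this member was already added\\n    require(committeeIndexPlusOne[member] == 0, "Already a member");\\n    committeeArray.push(member);\\n    committeeIndexPlusOne[member] = committeeArray.length;\\n}\\n```\\n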
null
```\\nfor (uint256 i = 0; i < committeeMembers\\_.length; ++i) {\\n address member = committeeMembers\\_[i];\\n committeeArray.push(member);\\n committeeIndexPlusOne[member] = committeeArray.length;\\n}\\n```\\n
Pool.buy - Users may end up paying more than intended due to changes in policy.weeklyPremium
medium
The price that an insurance buyer has to pay for insurance is determined by the duration of the coverage and the `weeklyPremium`. The price increases as the `weeklyPremium` increases. If a `buy` transaction is waiting in the mempool but eventually front-run by another transaction that increases `weeklyPremium`, the user will end up paying more than they anticipated for the same insurance coverage (assuming their allowance to the `Pool` contract is unlimited or at least higher than what they expected to pay).\\n```\\nuint256 premium = amount\\_.mul(policy.weeklyPremium).div(RATIO\\_BASE);\\nuint256 allPremium = premium.mul(toWeek\\_.sub(fromWeek\\_));\\n```\\n
Consider adding a parameter for the maximum amount to pay, and make sure that the transaction will revert if `allPremium` is greater than this maximum value.
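A minimal sketch of the guard; the `maxPremium_` parameter and the surrounding signature are assumptions:\\n```\\nfunction buy(\\n    uint256 policyIndex\\_,\\n    uint256 amount\\_,\\n    uint256 fromWeek\\_,\\n    uint256 toWeek\\_,\\n    uint256 maxPremium\\_\\n) external {\\n    // ...existing premium computation...\\n    uint256 premium = amount\\_.mul(policy.weeklyPremium).div(RATIO\\_BASE);\\n    uint256 allPremium = premium.mul(toWeek\\_.sub(fromWeek\\_));\\n    // revert if the premium rose between signing and inclusion\\n    require(allPremium <= maxPremium\\_, "Premium exceeds maximum");\\n    // ...\\n}\\n```\\n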
null
```\\nuint256 premium = amount\\_.mul(policy.weeklyPremium).div(RATIO\\_BASE);\\nuint256 allPremium = premium.mul(toWeek\\_.sub(fromWeek\\_));\\n```\\n
Missing validation checks in execute
medium
The `Pool` contract implements a threshold voting mechanism for some changes in the contract state, where either the pool manager or a committee member can propose a change by calling `claim`, `changePoolManager`, `addToCommittee`, `removeFromCommittee`, or `changeCommitteeThreshold`, and then the committee has a time period for voting. If the threshold is reached during this period, then anyone can call `execute` to `execute` the state change.\\nWhile some validation checks are implemented in the proposal phase, this is not enough to ensure that business logic rules around these changes are completely enforced.\\n`_executeRemoveFromCommittee` - While the `removeFromCommittee` function makes sure that `committeeArray.length > committeeThreshold`, i.e., that there should always be enough committee members to reach the threshold, the same validation check is not enforced in `_executeRemoveFromCommittee`. To better illustrate the issue, let's consider the following example: `committeeArray.length = 5`, `committeeThreshold = 4`, and now `removeFromCommittee` is called two times in a row, where the second call is made before the first call reaches the threshold. In this case, both requests will be executed successfully, and we end up with `committeeArray.length = 3` and `committeeThreshold = 4`, which is clearly not desired.\\n`_executeChangeCommitteeThreshold` - Applying the same concept here, this function lacks the validation check of `threshold_ <= committeeArray.length`, leading to the same issue as above. Let's consider the following example: `committeeArray.length = 3`, `committeeThreshold = 2`, and now `changeCommitteeThreshold` is called with `threshold_ = 3`, but before this request is executed, `removeFromCommittee` is called. After both requests have been executed successfully, we will end up with `committeeThreshold = 3` and `committeeArray.length = 2`, which is clearly not desired.\\n```\\nfunction \\_executeRemoveFromCommittee(address who\\_) private {\\n```\\n\\n```\\nfunction \\_executeChangeCommitteeThreshold(uint256 threshold\\_) private {\\n```\\n
Apply the same validation checks in the functions that execute the state change.
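A minimal sketch of the execution-time re-checks (error strings are assumptions):\\n```\\nfunction \\_executeRemoveFromCommittee(address who\\_) private {\\n    // re-validate at execution time, not only at proposal time\\n    require(committeeArray.length > committeeThreshold, "Not enough members");\\n    // ...existing removal logic...\\n}\\n\\nfunction \\_executeChangeCommitteeThreshold(uint256 threshold\\_) private {\\n    require(threshold\\_ <= committeeArray.length, "Threshold exceeds committee size");\\n    // ...existing update logic...\\n}\\n```\\n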
null
```\\nfunction \\_executeRemoveFromCommittee(address who\\_) private {\\n```\\n
Hard-coded minimum deposit amount
low
Resolution\\nFixed in 3bbafab926df0ea39f444ef0fd5d2a6197f99a5d by implementing the auditor's recommendation.\\nThe `deposit` function specifies a minimum amount of 1e12 units of the base token for a deposit:\\n```\\nuint256 constant AMOUNT\\_PER\\_SHARE = 1e18;\\n```\\n\\n```\\n// Anyone can be a seller, and deposit baseToken (e.g. USDC or WETH)\\n// to the pool.\\nfunction deposit(\\n uint256 amount\\_\\n) external noReenter {\\n require(enabled, "Not enabled");\\n\\n require(amount\\_ >= AMOUNT\\_PER\\_SHARE / 1000000, "Less than minimum");\\n```\\n\\nWhether that's an appropriate minimum amount or not depends on the base token. Note that the two example tokens listed above are USDC and WETH. With current ETH prices, 1e12 Wei cost an affordable 0.2 US Cent. USDC, on the other hand, has 6 decimals, so 1e12 units are worth 1 million USD, which is … steep.
The minimum deposit amount should be configurable.
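A minimal sketch of a configurable minimum (the setter and event names are assumptions):\\n```\\nuint256 public minDepositAmount = AMOUNT\\_PER\\_SHARE / 1000000;\\n\\nevent MinDepositAmountSet(uint256 minDepositAmount);\\n\\nfunction setMinDepositAmount(uint256 minDepositAmount\\_) external onlyPoolManager {\\n    minDepositAmount = minDepositAmount\\_;\\n    emit MinDepositAmountSet(minDepositAmount\\_);\\n}\\n\\n// in deposit():\\nrequire(amount\\_ >= minDepositAmount, "Less than minimum");\\n```\\n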
null
```\\nuint256 constant AMOUNT\\_PER\\_SHARE = 1e18;\\n```\\n
Outdated Solidity version
low
The source files' version pragmas either specify that they need compiler version exactly 0.8.10 or at least 0.8.10:\\n```\\npragma solidity 0.8.10;\\n```\\n\\n```\\npragma solidity ^0.8.10;\\n```\\n\\nSolidity v0.8.10 is a fairly dated version that has known security issues. We generally recommend using the latest version of the compiler (at the time of writing, this is v0.8.20), and we also discourage the use of floating pragmas to make sure that the source files are actually compiled and deployed with the same compiler version they have been tested with.
Resolution\\nFixed in 3bbafab926df0ea39f444ef0fd5d2a6197f99a5d by implementing the auditor's recommendation.\\nUse the Solidity compiler v0.8.20, and change the version pragma in all Solidity source files to `pragma solidity 0.8.20;`.
null
```\\npragma solidity 0.8.10;\\n```\\n
Code used for testing purposes should be removed before deployment
low
Variables and logic have been added to the code whose only purpose is to make it easier to test. This might cause unexpected behavior if deployed in production. For instance, `onlyTest` and `setTimeExtra` should be removed from the code before deployment, as well as `timeExtra` in `getCurrentWeek` and `getNow`.\\n```\\nmodifier onlyTest() {\\n```\\n\\n```\\nfunction setTimeExtra(uint256 timeExtra\\_) external onlyTest {\\n```\\n\\n```\\nfunction getCurrentWeek() public view returns(uint256) {\\n return (block.timestamp + TIME\\_OFFSET + timeExtra) / (7 days);\\n}\\n```\\n\\n```\\nfunction getNow() public view returns(uint256) {\\n return block.timestamp + timeExtra;\\n}\\n```\\n
For the long term, consider mimicking this behavior by using features offered by your testing framework.
null
```\\nmodifier onlyTest() {\\n```\\n
Missing events
low
Some state-changing functions do not emit an event at all or omit relevant information.\\nA. `Pool.setEventAggregator` should emit an event with the value of `eventAggregator_` so that off-chain services will be notified and can automatically adjust.\\n```\\nfunction setEventAggregator(address eventAggregator\\_) external onlyPoolManager {\\n eventAggregator = eventAggregator\\_;\\n}\\n```\\n\\nB. `Pool.enablePool` should emit an event when the pool is dis- or enabled.\\n```\\nfunction enablePool(bool enabled\\_) external onlyPoolManager {\\n enabled = enabled\\_;\\n}\\n```\\n\\nC. `Pool.execute` only logs the `requestIndex_` while it should also include the `operation` and `data` to better reflect the state change in the transaction.\\n```\\nif (eventAggregator != address(0)) {\\n IEventAggregator(eventAggregator).execute(\\n requestIndex\\_\\n );\\n}\\n```\\n
State-changing functions should emit an event to have an audit trail and enable monitoring of smart contract usage.
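A minimal sketch for case A (the event name is an assumption; cases B and C follow the same pattern):\\n```\\nevent EventAggregatorSet(address indexed eventAggregator);\\n\\nfunction setEventAggregator(address eventAggregator\\_) external onlyPoolManager {\\n    eventAggregator = eventAggregator\\_;\\n    emit EventAggregatorSet(eventAggregator\\_);\\n}\\n```\\n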
null
```\\nfunction setEventAggregator(address eventAggregator\\_) external onlyPoolManager {\\n eventAggregator = eventAggregator\\_;\\n}\\n```\\n
InfinityPool contract authorization bypass attack
high
An attacker could create their own credential and set the `Agent` ID to `0`, which would bypass the `subjectIsAgentCaller` modifier. The attacker could use this attack to `borrow` funds from the pool, draining any available liquidity. For example, only an `Agent` should be able to `borrow` funds from the pool and call the `borrow` function:\\n```\\nfunction borrow(VerifiableCredential memory vc) external isOpen subjectIsAgentCaller(vc) {\\n // 1e18 => 1 FIL, can't borrow less than 1 FIL\\n if (vc.value < WAD) revert InvalidParams();\\n // can't borrow more than the pool has\\n if (totalBorrowableAssets() < vc.value) revert InsufficientLiquidity();\\n Account memory account = \\_getAccount(vc.subject);\\n // fresh account, set start epoch and epochsPaid to beginning of current window\\n if (account.principal == 0) {\\n uint256 currentEpoch = block.number;\\n account.startEpoch = currentEpoch;\\n account.epochsPaid = currentEpoch;\\n GetRoute.agentPolice(router).addPoolToList(vc.subject, id);\\n }\\n\\n account.principal += vc.value;\\n account.save(router, vc.subject, id);\\n\\n totalBorrowed += vc.value;\\n\\n emit Borrow(vc.subject, vc.value);\\n\\n // interact - here `msg.sender` must be the Agent bc of the `subjectIsAgentCaller` modifier\\n asset.transfer(msg.sender, vc.value);\\n}\\n```\\n\\nThe following modifier checks that the caller is an Agent:\\n```\\nmodifier subjectIsAgentCaller(VerifiableCredential memory vc) {\\n if (\\n GetRoute.agentFactory(router).agents(msg.sender) != vc.subject\\n ) revert Unauthorized();\\n \\_;\\n}\\n```\\n\\nBut if the caller is not an `Agent`, the `GetRoute.agentFactory(router).agents(msg.sender)` will return `0`. And if the `vc.subject` is also zero, the check will be successful with any `msg.sender`. The attacker can also pass an arbitrary `vc.value` as the parameter and steal all the funds from the pool.
Ensure only an `Agent` can call `borrow` and pass the `subjectIsAgentCaller` modifier.
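A minimal sketch of the hardened modifier, rejecting unregistered callers explicitly:\\n```\\nmodifier subjectIsAgentCaller(VerifiableCredential memory vc) {\\n    uint256 agentID = GetRoute.agentFactory(router).agents(msg.sender);\\n    // agentID == 0 means msg.sender is not a registered Agent\\n    if (agentID == 0 || agentID != vc.subject) revert Unauthorized();\\n    \\_;\\n}\\n```\\n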
null
```\\nfunction borrow(VerifiableCredential memory vc) external isOpen subjectIsAgentCaller(vc) {\\n // 1e18 => 1 FIL, can't borrow less than 1 FIL\\n if (vc.value < WAD) revert InvalidParams();\\n // can't borrow more than the pool has\\n if (totalBorrowableAssets() < vc.value) revert InsufficientLiquidity();\\n Account memory account = \\_getAccount(vc.subject);\\n // fresh account, set start epoch and epochsPaid to beginning of current window\\n if (account.principal == 0) {\\n uint256 currentEpoch = block.number;\\n account.startEpoch = currentEpoch;\\n account.epochsPaid = currentEpoch;\\n GetRoute.agentPolice(router).addPoolToList(vc.subject, id);\\n }\\n\\n account.principal += vc.value;\\n account.save(router, vc.subject, id);\\n\\n totalBorrowed += vc.value;\\n\\n emit Borrow(vc.subject, vc.value);\\n\\n // interact - here `msg.sender` must be the Agent bc of the `subjectIsAgentCaller` modifier\\n asset.transfer(msg.sender, vc.value);\\n}\\n```\\n
Wrong accounting for totalBorrowed in the InfinityPool.writeOff function
high
Here is a part of the `InfinityPool.writeOff` function:\\n```\\n// transfer the assets into the pool\\n// whatever we couldn't pay back\\nuint256 lostAmt = principalOwed > recoveredFunds ? principalOwed - recoveredFunds : 0;\\n\\nuint256 totalOwed = interestPaid + principalOwed;\\n\\nasset.transferFrom(\\n msg.sender,\\n address(this),\\n totalOwed > recoveredFunds ? recoveredFunds : totalOwed\\n);\\n// write off only what we lost\\ntotalBorrowed -= lostAmt;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\n\\naccount.save(router, agentID, id);\\n```\\n\\nThe `totalBorrowed` is decreased by the `lostAmt` value. Instead, it should be decreased by the original `account.principal` value to acknowledge the loss.
Resolution\\nFixed.
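A minimal sketch of the corrected accounting, decrementing before the principal is overwritten:\\n```\\n// write off the account's full pre-liquidation principal\\ntotalBorrowed -= account.principal;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\n\\naccount.save(router, agentID, id);\\n```\\n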
null
```\\n// transfer the assets into the pool\\n// whatever we couldn't pay back\\nuint256 lostAmt = principalOwed > recoveredFunds ? principalOwed - recoveredFunds : 0;\\n\\nuint256 totalOwed = interestPaid + principalOwed;\\n\\nasset.transferFrom(\\n msg.sender,\\n address(this),\\n totalOwed > recoveredFunds ? recoveredFunds : totalOwed\\n);\\n// write off only what we lost\\ntotalBorrowed -= lostAmt;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\n\\naccount.save(router, agentID, id);\\n```\\n
The beneficiaryWithdrawable function can be called by anyone
high
The `beneficiaryWithdrawable` function is supposed to be called by the Agent when a beneficiary is trying to withdraw funds:\\n```\\nfunction beneficiaryWithdrawable(\\n address recipient,\\n address sender,\\n uint256 agentID,\\n uint256 proposedAmount\\n) external returns (\\n uint256 amount\\n) {\\n AgentBeneficiary memory beneficiary = \\_agentBeneficiaries[agentID];\\n address benneficiaryAddress = beneficiary.active.beneficiary;\\n // If the sender is not the owner of the Agent or the beneficiary, revert\\n if(\\n !(benneficiaryAddress == sender || (IAuth(msg.sender).owner() == sender && recipient == benneficiaryAddress) )) {\\n revert Unauthorized();\\n }\\n (\\n beneficiary,\\n amount\\n ) = beneficiary.withdraw(proposedAmount);\\n // update the beneficiary in storage\\n \\_agentBeneficiaries[agentID] = beneficiary;\\n}\\n```\\n\\nThis function reduces the quota that is supposed to be transferred during the `withdraw` call:\\n```\\n sendAmount = agentPolice.beneficiaryWithdrawable(receiver, msg.sender, id, sendAmount);\\n}\\nelse if (msg.sender != owner()) {\\n revert Unauthorized();\\n}\\n\\n// unwrap any wfil needed to withdraw\\n\\_poolFundsInFIL(sendAmount);\\n// transfer funds\\npayable(receiver).sendValue(sendAmount);\\n```\\n\\nThe issue is that anyone can call this function directly, and the quota will be reduced without funds being transferred.
Ensure only the Agent can call this function.
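A minimal sketch of the caller check, assuming the agent factory can map the caller back to its Agent ID as elsewhere in the codebase:\\n```\\n// only the Agent this beneficiary record belongs to may reduce the quota\\nif (agentID == 0 || GetRoute.agentFactory(router).agents(msg.sender) != agentID) {\\n    revert Unauthorized();\\n}\\n```\\n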
null
```\\nfunction beneficiaryWithdrawable(\\n address recipient,\\n address sender,\\n uint256 agentID,\\n uint256 proposedAmount\\n) external returns (\\n uint256 amount\\n) {\\n AgentBeneficiary memory beneficiary = \\_agentBeneficiaries[agentID];\\n address benneficiaryAddress = beneficiary.active.beneficiary;\\n // If the sender is not the owner of the Agent or the beneficiary, revert\\n if(\\n !(benneficiaryAddress == sender || (IAuth(msg.sender).owner() == sender && recipient == benneficiaryAddress) )) {\\n revert Unauthorized();\\n }\\n (\\n beneficiary,\\n amount\\n ) = beneficiary.withdraw(proposedAmount);\\n // update the beneficiary in storage\\n \\_agentBeneficiaries[agentID] = beneficiary;\\n}\\n```\\n
An Agent can borrow even with existing debt in interest payments
medium
To `borrow` funds, an `Agent` has to call the `borrow` function of the pool:\\n```\\nfunction borrow(VerifiableCredential memory vc) external isOpen subjectIsAgentCaller(vc) {\\n // 1e18 => 1 FIL, can't borrow less than 1 FIL\\n if (vc.value < WAD) revert InvalidParams();\\n // can't borrow more than the pool has\\n if (totalBorrowableAssets() < vc.value) revert InsufficientLiquidity();\\n Account memory account = \\_getAccount(vc.subject);\\n // fresh account, set start epoch and epochsPaid to beginning of current window\\n if (account.principal == 0) {\\n uint256 currentEpoch = block.number;\\n account.startEpoch = currentEpoch;\\n account.epochsPaid = currentEpoch;\\n GetRoute.agentPolice(router).addPoolToList(vc.subject, id);\\n }\\n\\n account.principal += vc.value;\\n account.save(router, vc.subject, id);\\n\\n totalBorrowed += vc.value;\\n\\n emit Borrow(vc.subject, vc.value);\\n\\n // interact - here `msg.sender` must be the Agent bc of the `subjectIsAgentCaller` modifier\\n asset.transfer(msg.sender, vc.value);\\n}\\n```\\n\\nLet's assume that the `Agent` already has some funds borrowed. During this function's execution, the current debt status is not checked. The principal debt increases after borrowing, but `account.epochsPaid` remains the same, so interest on the newly borrowed amount accrues retroactively, as if it had been borrowed at `account.epochsPaid`.
Ensure the debt is paid when borrowing more funds.
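A minimal sketch of one way to enforce this; the `OutstandingInterest` error is hypothetical, and accruing the owed interest into the principal before increasing it would also work:\\n```\\nerror OutstandingInterest(); // hypothetical\\n\\n// inside borrow(), after loading the account:\\nif (account.principal != 0 && account.epochsPaid < block.number) {\\n    revert OutstandingInterest();\\n}\\n```\\n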
null
```\\nfunction borrow(VerifiableCredential memory vc) external isOpen subjectIsAgentCaller(vc) {\\n // 1e18 => 1 FIL, can't borrow less than 1 FIL\\n if (vc.value < WAD) revert InvalidParams();\\n // can't borrow more than the pool has\\n if (totalBorrowableAssets() < vc.value) revert InsufficientLiquidity();\\n Account memory account = \\_getAccount(vc.subject);\\n // fresh account, set start epoch and epochsPaid to beginning of current window\\n if (account.principal == 0) {\\n uint256 currentEpoch = block.number;\\n account.startEpoch = currentEpoch;\\n account.epochsPaid = currentEpoch;\\n GetRoute.agentPolice(router).addPoolToList(vc.subject, id);\\n }\\n\\n account.principal += vc.value;\\n account.save(router, vc.subject, id);\\n\\n totalBorrowed += vc.value;\\n\\n emit Borrow(vc.subject, vc.value);\\n\\n // interact - here `msg.sender` must be the Agent bc of the `subjectIsAgentCaller` modifier\\n asset.transfer(msg.sender, vc.value);\\n}\\n```\\n
The AgentPolice.distributeLiquidatedFunds() function can have undistributed residual funds
medium
When an Agent is liquidated, the liquidator (owner of the protocol) is supposed to try to redeem as many funds as possible and re-distribute them to the pools:\\n```\\nfunction distributeLiquidatedFunds(uint256 agentID, uint256 amount) external {\\n if (!liquidated[agentID]) revert Unauthorized();\\n\\n // transfer the assets into the pool\\n GetRoute.wFIL(router).transferFrom(msg.sender, address(this), amount);\\n \\_writeOffPools(agentID, amount);\\n}\\n```\\n\\nThe problem is that the pool's accounting allows the recovered amount to be larger than the debt. In that case, the pool won't transfer more funds than it is owed:\\n```\\nuint256 totalOwed = interestPaid + principalOwed;\\n\\nasset.transferFrom(\\n msg.sender,\\n address(this),\\n totalOwed > recoveredFunds ? recoveredFunds : totalOwed\\n);\\n// write off only what we lost\\ntotalBorrowed -= lostAmt;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\n\\naccount.save(router, agentID, id);\\n\\nemit WriteOff(agentID, recoveredFunds, lostAmt, interestPaid);\\n```\\n\\nIf that happens, the remaining funds will be stuck in the `AgentPolice` contract.
Return the residual funds to the Agent's owner or process them in some way so they are not lost.
null
```\\nfunction distributeLiquidatedFunds(uint256 agentID, uint256 amount) external {\\n if (!liquidated[agentID]) revert Unauthorized();\\n\\n // transfer the assets into the pool\\n GetRoute.wFIL(router).transferFrom(msg.sender, address(this), amount);\\n \\_writeOffPools(agentID, amount);\\n}\\n```\\n
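A hedged sketch of one way to avoid stranding the residue: measure how much the pools actually pulled and refund the rest. The `agentOwner(agentID)` recipient lookup is hypothetical, since the Agent's address is not derivable from `agentID` in the audited snippet:\\n```\\nGetRoute.wFIL(router).transferFrom(msg.sender, address(this), amount);\\nuint256 balanceBefore = GetRoute.wFIL(router).balanceOf(address(this));\\n\\_writeOffPools(agentID, amount);\\nuint256 pulled = balanceBefore - GetRoute.wFIL(router).balanceOf(address(this));\\nif (amount > pulled) {\\n // refund whatever the pools did not claim\\n GetRoute.wFIL(router).transfer(agentOwner(agentID), amount - pulled);\\n}\\n```\\n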
An Agent can be upgraded even if there is no new implementation
medium
Agents can be upgraded to a new implementation, and only the Agent's owner can call the upgrade function:\\n```\\nfunction upgradeAgent(\\n address agent\\n) external returns (address newAgent) {\\n IAgent oldAgent = IAgent(agent);\\n address owner = IAuth(address(oldAgent)).owner();\\n uint256 agentId = agents[agent];\\n // only the Agent's owner can upgrade, and only a registered agent can be upgraded\\n if (owner != msg.sender || agentId == 0) revert Unauthorized();\\n // deploy a new instance of Agent with the same ID and auth\\n newAgent = GetRoute.agentDeployer(router).deploy(\\n router,\\n agentId,\\n owner,\\n IAuth(address(oldAgent)).operator()\\n );\\n // Register the new agent and unregister the old agent\\n agents[newAgent] = agentId;\\n // transfer funds from old agent to new agent and mark old agent as decommissioning\\n oldAgent.decommissionAgent(newAgent);\\n // delete the old agent from the registry\\n agents[agent] = 0;\\n}\\n```\\n\\nThe issue is that the owner can trigger the upgrade even if no new implementation exists. Multiple problems derive from this.\\nUpgrading to the current implementation of the Agent will break the logic because the current version does not call the `migrateMiner` function, so all the miners will stay with the old Agent, and their funds will be lost.\\nThe owner can accidentally trigger multiple upgrades simultaneously, leading to a loss of funds (https://github.com/ConsenSysDiligence/glif-audit-2023-04/issues/2).\\nThe owner also has no control over the new version of the Agent. To increase decentralization, it would be better to additionally pass the deployer's address as a parameter.
Ensure the upgrades can only happen when there is a new version of an Agent, and the owner controls this version.
null
```\\nfunction upgradeAgent(\\n address agent\\n) external returns (address newAgent) {\\n IAgent oldAgent = IAgent(agent);\\n address owner = IAuth(address(oldAgent)).owner();\\n uint256 agentId = agents[agent];\\n // only the Agent's owner can upgrade, and only a registered agent can be upgraded\\n if (owner != msg.sender || agentId == 0) revert Unauthorized();\\n // deploy a new instance of Agent with the same ID and auth\\n newAgent = GetRoute.agentDeployer(router).deploy(\\n router,\\n agentId,\\n owner,\\n IAuth(address(oldAgent)).operator()\\n );\\n // Register the new agent and unregister the old agent\\n agents[newAgent] = agentId;\\n // transfer funds from old agent to new agent and mark old agent as decommissioning\\n oldAgent.decommissionAgent(newAgent);\\n // delete the old agent from the registry\\n agents[agent] = 0;\\n}\\n```\\n
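A hedged sketch of a version gate; both `version()` getters are hypothetical and would have to be added to the deployer and the Agent:\\n```\\n// Sketch: refuse upgrades that do not move to a strictly newer implementation\\nif (GetRoute.agentDeployer(router).version() <= IAgent(agent).version()) revert Unauthorized();\\n```\\n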
Potential re-entrancy issues when upgrading the contracts
low
The protocol doesn't have any built-in re-entrancy protection mechanisms. That is mainly explained by the use of the `wFIL` token, which is not supposed to allow reentrancy, and by the careful use of `FIL` transfers.\\nHowever, there are some places in the code where things may go wrong in the future. For example, when upgrading an Agent:\\n```\\nfunction upgradeAgent(\\n address agent\\n) external returns (address newAgent) {\\n IAgent oldAgent = IAgent(agent);\\n address owner = IAuth(address(oldAgent)).owner();\\n uint256 agentId = agents[agent];\\n // only the Agent's owner can upgrade, and only a registered agent can be upgraded\\n if (owner != msg.sender || agentId == 0) revert Unauthorized();\\n // deploy a new instance of Agent with the same ID and auth\\n newAgent = GetRoute.agentDeployer(router).deploy(\\n router,\\n agentId,\\n owner,\\n IAuth(address(oldAgent)).operator()\\n );\\n // Register the new agent and unregister the old agent\\n agents[newAgent] = agentId;\\n // transfer funds from old agent to new agent and mark old agent as decommissioning\\n oldAgent.decommissionAgent(newAgent);\\n // delete the old agent from the registry\\n agents[agent] = 0;\\n}\\n```\\n\\nHere, the `oldAgent.decommissionAgent(newAgent);` call happens before the `oldAgent` is deleted. Inside this function, we see:\\n```\\nfunction decommissionAgent(address \\_newAgent) external {\\n // only the agent factory can decommission an agent\\n AuthController.onlyAgentFactory(router, msg.sender);\\n // if the newAgent has a mismatching ID, revert\\n if(IAgent(\\_newAgent).id() != id) revert Unauthorized();\\n // set the newAgent in storage, which marks the upgrade process as starting\\n newAgent = \\_newAgent;\\n uint256 \\_liquidAssets = liquidAssets();\\n // Withdraw all liquid funds from the Agent to the newAgent\\n \\_poolFundsInFIL(\\_liquidAssets);\\n // transfer funds to new agent\\n payable(\\_newAgent).sendValue(\\_liquidAssets);\\n}\\n```\\n\\nHere, the FIL is transferred to a new contract which is currently unimplemented and unknown. Potentially, the fallback function of this contract could trigger a re-entrancy attack. If that's the case, during the execution of this function, there will be two contracts that are active agents with the same ID, and the attacker can try to use that maliciously.
Be very cautious with further implementations of agents and pools. Also, consider using reentrancy protection in public functions.
null
```\\nfunction upgradeAgent(\\n address agent\\n) external returns (address newAgent) {\\n IAgent oldAgent = IAgent(agent);\\n address owner = IAuth(address(oldAgent)).owner();\\n uint256 agentId = agents[agent];\\n // only the Agent's owner can upgrade, and only a registered agent can be upgraded\\n if (owner != msg.sender || agentId == 0) revert Unauthorized();\\n // deploy a new instance of Agent with the same ID and auth\\n newAgent = GetRoute.agentDeployer(router).deploy(\\n router,\\n agentId,\\n owner,\\n IAuth(address(oldAgent)).operator()\\n );\\n // Register the new agent and unregister the old agent\\n agents[newAgent] = agentId;\\n // transfer funds from old agent to new agent and mark old agent as decommissioning\\n oldAgent.decommissionAgent(newAgent);\\n // delete the old agent from the registry\\n agents[agent] = 0;\\n}\\n```\\n
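As a defensive measure, a standard mutex around the upgrade path would make a reentering call revert even if a malicious fallback is triggered mid-upgrade. A minimal sketch using OpenZeppelin's guard (the integration details are assumptions, not the team's implementation):\\n```\\nimport {ReentrancyGuard} from "@openzeppelin/contracts/security/ReentrancyGuard.sol";\\n\\ncontract AgentFactory is ReentrancyGuard {\\n function upgradeAgent(address agent) external nonReentrant returns (address newAgent) {\\n // existing logic unchanged; a reentrant call now reverts\\n }\\n}\\n```\\n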
InfinityPool is subject to a donation-based inflation attack if emptied.
low
Since `InfinityPool` is an implementation of an ERC4626 vault, it too is susceptible to inflation attacks. An attacker could front-run the first deposit and inflate the share price to the point where the following deposit is worth less than 1 wei of shares, resulting in 0 shares minted. The attacker could conduct the inflation by self-destructing another contract that sends FIL to the pool. In the case of GLIF this attack is less likely on the first pool, since the GLIF team accepts predeposits, so some amount of shares has already been minted. We do suggest fixing this issue before the next pool is deployed without a pre-stake.\\n```\\n/\\*//////////////////////////////////////////////////////////////\\n 4626 LOGIC\\n//////////////////////////////////////////////////////////////\\*/\\n\\n/\\*\\*\\n \\* @dev Converts `assets` to shares\\n \\* @param assets The amount of assets to convert\\n \\* @return shares - The amount of shares converted from assets\\n \\*/\\nfunction convertToShares(uint256 assets) public view returns (uint256) {\\n uint256 supply = liquidStakingToken.totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? assets : assets \\* supply / totalAssets();\\n}\\n\\n/\\*\\*\\n \\* @dev Converts `shares` to assets\\n \\* @param shares The amount of shares to convert\\n \\* @return assets - The amount of assets converted from shares\\n \\*/\\nfunction convertToAssets(uint256 shares) public view returns (uint256) {\\n uint256 supply = liquidStakingToken.totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? shares : shares \\* totalAssets() / supply;\\n}\\n```\\n
Since the pool does not need to accept donations, the easiest way to handle this case is to use virtual price, where the balance of the contract is duplicated in a separate variable.
null
```\\n/\\*//////////////////////////////////////////////////////////////\\n 4626 LOGIC\\n//////////////////////////////////////////////////////////////\\*/\\n\\n/\\*\\*\\n \\* @dev Converts `assets` to shares\\n \\* @param assets The amount of assets to convert\\n \\* @return shares - The amount of shares converted from assets\\n \\*/\\nfunction convertToShares(uint256 assets) public view returns (uint256) {\\n uint256 supply = liquidStakingToken.totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? assets : assets \\* supply / totalAssets();\\n}\\n\\n/\\*\\*\\n \\* @dev Converts `shares` to assets\\n \\* @param shares The amount of shares to convert\\n \\* @return assets - The amount of assets converted from shares\\n \\*/\\nfunction convertToAssets(uint256 shares) public view returns (uint256) {\\n uint256 supply = liquidStakingToken.totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? shares : shares \\* totalAssets() / supply;\\n}\\n```\\n
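A hedged sketch of the suggested internal-balance approach; the `trackedAssets` name and the exact composition of `totalAssets()` (e.g. whether it adds `totalBorrowed`) are assumptions:\\n```\\nuint256 internal trackedAssets; // updated only on deposits, withdrawals and payments\\n\\nfunction totalAssets() public view returns (uint256) {\\n // donations (including self-destruct transfers) no longer move the share price\\n return trackedAssets;\\n}\\n\\nfunction deposit(uint256 assets, address receiver) public returns (uint256 shares) {\\n shares = convertToShares(assets);\\n asset.transferFrom(msg.sender, address(this), assets);\\n trackedAssets += assets;\\n liquidStakingToken.mint(receiver, shares);\\n}\\n```\\n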
maxWithdraw should potentially account for the funds available in the ramp.
low
Since `InfinityPool` is an ERC4626 vault, it should also support the `maxWithdraw` method. According to the EIP, it should include any withdrawal limitation that the participant could encounter. At the moment, the `maxWithdraw` function returns the maximum amount of IOU tokens rather than WFIL. Since the IOU token is not the `asset` token of the vault, this behavior is not ideal.\\n```\\nfunction maxWithdraw(address owner) public view returns (uint256) {\\n return convertToAssets(liquidStakingToken.balanceOf(owner));\\n}\\n```\\n
We suggest considering returning the maximum amount of WFIL withdrawal which should account for Ramp balance.
null
```\\nfunction maxWithdraw(address owner) public view returns (uint256) {\\n return convertToAssets(liquidStakingToken.balanceOf(owner));\\n}\\n```\\n
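A hedged sketch of the suggested behavior; how the ramp's liquidity is queried (the `ramp` address and its WFIL balance) is an assumption:\\n```\\nfunction maxWithdraw(address owner) public view returns (uint256) {\\n uint256 byShares = convertToAssets(liquidStakingToken.balanceOf(owner));\\n uint256 rampLiquidity = asset.balanceOf(ramp); // assumption: the ramp holds the exit liquidity\\n return byShares < rampLiquidity ? byShares : rampLiquidity;\\n}\\n```\\n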
The upgradeability of MinerRegistry, AgentPolice, and Agent is overcomplicated and has a high chance of errors. Acknowledged
low
During the engagement, we have identified a few places that signify that the `Agent`, `MinerRegistry` and `AgentPolice` can be upgraded, for example:\\nAbility to migrate a miner from one version of the Agent to another inside `migrateMiner`.\\nAbility to `refreshRoutes`, which would update the `AgentPolice` and `MinerRegistry` addresses for a given Agent.\\nAbility to `decommission` a pool. While this functionality is present, it is not very well thought through. For example, both `MinerRegistry` and `AgentPolice` are not upgradable but have mappings inside of them.\\n```\\nmapping(uint256 => bool) public liquidated;\\n\\n/// @notice `\\_poolIDs` maps agentID to the pools they have actively borrowed from\\nmapping(uint256 => uint256[]) private \\_poolIDs;\\n\\n/// @notice `\\_credentialUseBlock` maps signature bytes to when a credential was used\\nmapping(bytes32 => uint256) private \\_credentialUseBlock;\\n\\n/// @notice `\\_agentBeneficiaries` maps an Agent ID to its Beneficiary struct\\nmapping(uint256 => AgentBeneficiary) private \\_agentBeneficiaries;\\n```\\n\\n```\\nmapping(bytes32 => bool) private \\_minerRegistered;\\n\\nmapping(uint256 => uint64[]) private \\_minersByAgent;\\n```\\n\\nThat means that any time these contracts need to be upgraded, the contents of those mappings will need to be somehow recreated in the new contract. That is not trivial, since it is not easy to obtain all values of a mapping. It will also require additional protocol-controlled setters ("kickstart mapping" functions), which are not ideal.\\nIn the case of `Agent`, if the contract were upgradable, there would be no need for a miner-migration process, which can be tedious and opens possibilities for errors. Since the protocol already has a lot of centralization and trust assumptions, adding upgradability would not contribute much to them.\\nWe also believe that during the upgrade of a pool, the PoolToken will stay the same in the new pool. That means that the minting and burning permissions of the share tokens have to be carefully updated, or checked in a manner that does not require the address of the pool to be constant. Since we did not have access to this file, we cannot check whether that is done correctly.
Consider using upgradable contracts or have a solid upgrade plan that is well-tested before an emergency situation occurs.
null
```\\nmapping(uint256 => bool) public liquidated;\\n\\n/// @notice `\\_poolIDs` maps agentID to the pools they have actively borrowed from\\nmapping(uint256 => uint256[]) private \\_poolIDs;\\n\\n/// @notice `\\_credentialUseBlock` maps signature bytes to when a credential was used\\nmapping(bytes32 => uint256) private \\_credentialUseBlock;\\n\\n/// @notice `\\_agentBeneficiaries` maps an Agent ID to its Beneficiary struct\\nmapping(uint256 => AgentBeneficiary) private \\_agentBeneficiaries;\\n```\\n
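If the upgradable-contracts route is taken, a standard proxy keeps the mappings in place across upgrades, since state lives in the proxy while only the implementation is swapped. A minimal sketch, assuming OpenZeppelin's transparent proxy:\\n```\\nimport {TransparentUpgradeableProxy} from "@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol";\\n\\n// the registry mappings survive upgrades because they live in the proxy's storage\\nnew TransparentUpgradeableProxy(\\n address(minerRegistryImpl), // current implementation\\n proxyAdmin, // admin allowed to upgrade\\n "" // no initializer call in this sketch\\n);\\n```\\n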
Mint function in the InfinityPool will emit an incorrect value.
low
In the `InfinityPool` file, the `mint` function recomputes the amount of assets before emitting the event. While this is fine in a lot of cases, it will not always be: the results of `previewMint` and `convertToAssets` are only equal while `totalAssets` and `totalSupply` are equal. For example, this assumption will break after the first liquidation.\\n```\\nfunction mint(uint256 shares, address receiver) public isOpen returns (uint256 assets) {\\n if(shares == 0) revert InvalidParams();\\n // These transfers need to happen before the mint, and this is forcing a higher degree of coupling than is ideal\\n assets = previewMint(shares);\\n asset.transferFrom(msg.sender, address(this), assets);\\n liquidStakingToken.mint(receiver, shares);\\n assets = convertToAssets(shares);\\n emit Deposit(msg.sender, receiver, assets, shares);\\n}\\n```\\n
Use the `assets` value computed by the `previewMint` when emitting the event.
null
```\\nfunction mint(uint256 shares, address receiver) public isOpen returns (uint256 assets) {\\n if(shares == 0) revert InvalidParams();\\n // These transfers need to happen before the mint, and this is forcing a higher degree of coupling than is ideal\\n assets = previewMint(shares);\\n asset.transferFrom(msg.sender, address(this), assets);\\n liquidStakingToken.mint(receiver, shares);\\n assets = convertToAssets(shares);\\n emit Deposit(msg.sender, receiver, assets, shares);\\n}\\n```\\n
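The fix is a one-line change; a sketch of the corrected function, with only the event emission adjusted:\\n```\\nfunction mint(uint256 shares, address receiver) public isOpen returns (uint256 assets) {\\n if(shares == 0) revert InvalidParams();\\n assets = previewMint(shares);\\n asset.transferFrom(msg.sender, address(this), assets);\\n liquidStakingToken.mint(receiver, shares);\\n // emit the amount that was actually pulled from the sender\\n emit Deposit(msg.sender, receiver, assets, shares);\\n}\\n```\\n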
Potential overpayment due to rounding imprecision Won't Fix
low
Inside the `InfinityPool`, the `pay` function might accept unaccounted FIL. Imagine a situation where an Agent is trying to repay only the fees portion of the debt. In that case, the following branch will be executed:\\n```\\nif (vc.value <= interestOwed) {\\n // compute the amount of epochs this payment covers\\n // vc.value is not WAD yet, so divWadDown cancels the extra WAD in interestPerEpoch\\n uint256 epochsForward = vc.value.divWadDown(interestPerEpoch);\\n // update the account's `epochsPaid` cursor\\n account.epochsPaid += epochsForward;\\n // since the entire payment is interest, the entire payment is used to compute the fee (principal payments are fee-free)\\n feeBasis = vc.value;\\n} else {\\n```\\n\\nThe issue is that if `vc.value` does not divide evenly by `interestPerEpoch`, the remainder will remain in the InfinityPool.\\n```\\nuint256 epochsForward = vc.value.divWadDown(interestPerEpoch);\\n```\\n
Since the remainder will most likely not be too large this is not critical, but ideally, those remaining funds would be included in the `refund` variable.
null
```\\nif (vc.value <= interestOwed) {\\n // compute the amount of epochs this payment covers\\n // vc.value is not WAD yet, so divWadDown cancels the extra WAD in interestPerEpoch\\n uint256 epochsForward = vc.value.divWadDown(interestPerEpoch);\\n // update the account's `epochsPaid` cursor\\n account.epochsPaid += epochsForward;\\n // since the entire payment is interest, the entire payment is used to compute the fee (principal payments are fee-free)\\n feeBasis = vc.value;\\n} else {\\n```\\n
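A hedged sketch of how the dust could be refunded; `mulWadDown` is assumed to be available alongside `divWadDown` in the same fixed-point library:\\n```\\nuint256 epochsForward = vc.value.divWadDown(interestPerEpoch);\\n// round the payment down to a whole number of epochs and refund the remainder\\nuint256 usedForInterest = epochsForward.mulWadDown(interestPerEpoch);\\naccount.epochsPaid += epochsForward;\\nfeeBasis = usedForInterest;\\nrefund = vc.value - usedForInterest;\\n```\\n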
jumpStartAccount should be subject to the same approval checks as regular borrow.
low
The `InfinityPool` contract has the ability to kick-start an account that will have a debt position in this pool.\\n```\\nfunction jumpStartAccount(address receiver, uint256 agentID, uint256 accountPrincipal) external onlyOwner {\\n Account memory account = \\_getAccount(agentID);\\n // if the account is already initialized, revert\\n if (account.principal != 0) revert InvalidState();\\n // create the account\\n account.principal = accountPrincipal;\\n account.startEpoch = block.number;\\n account.epochsPaid = block.number;\\n // save the account\\n account.save(router, agentID, id);\\n // add the pool to the agent's list of borrowed pools\\n GetRoute.agentPolice(router).addPoolToList(agentID, id);\\n // mint the iFIL to the receiver, using principal as the deposit amount\\n liquidStakingToken.mint(receiver, convertToShares(accountPrincipal));\\n // account for the new principal in the total borrowed of the pool\\n totalBorrowed += accountPrincipal;\\n}\\n```\\n
We suggest that this action is subject to the same rules as the standard borrow action. Thus checks on DTE, LTV and DTI should be done if possible.
null
```\\nfunction jumpStartAccount(address receiver, uint256 agentID, uint256 accountPrincipal) external onlyOwner {\\n Account memory account = \\_getAccount(agentID);\\n // if the account is already initialized, revert\\n if (account.principal != 0) revert InvalidState();\\n // create the account\\n account.principal = accountPrincipal;\\n account.startEpoch = block.number;\\n account.epochsPaid = block.number;\\n // save the account\\n account.save(router, agentID, id);\\n // add the pool to the agent's list of borrowed pools\\n GetRoute.agentPolice(router).addPoolToList(agentID, id);\\n // mint the iFIL to the receiver, using principal as the deposit amount\\n liquidStakingToken.mint(receiver, convertToShares(accountPrincipal));\\n // account for the new principal in the total borrowed of the pool\\n totalBorrowed += accountPrincipal;\\n}\\n```\\n
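Since the exact DTE/LTV/DTI entry points are outside the audited snippet, the sketch below only illustrates the shape of such a guard; `maxBorrowable` is a hypothetical helper:\\n```\\nfunction jumpStartAccount(address receiver, uint256 agentID, uint256 accountPrincipal) external onlyOwner {\\n // hypothetical: enforce the same policy limits as a regular borrow\\n if (accountPrincipal > maxBorrowable(agentID)) revert InvalidParams();\\n // existing logic unchanged\\n}\\n```\\n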
InfinityPool Contract Authorization Bypass Attack
high
An attacker could create their own credential and set the `Agent` ID to `0`, which would bypass the `subjectIsAgentCaller` modifier. The attacker could use this attack to `borrow` funds from the pool, draining any available liquidity. For example, only an `Agent` should be able to `borrow` funds from the pool and call the `borrow` function:\\n```\\nfunction borrow(VerifiableCredential memory vc) external isOpen subjectIsAgentCaller(vc) {\\n // 1e18 => 1 FIL, can't borrow less than 1 FIL\\n if (vc.value < WAD) revert InvalidParams();\\n // can't borrow more than the pool has\\n if (totalBorrowableAssets() < vc.value) revert InsufficientLiquidity();\\n Account memory account = \\_getAccount(vc.subject);\\n // fresh account, set start epoch and epochsPaid to beginning of current window\\n if (account.principal == 0) {\\n uint256 currentEpoch = block.number;\\n account.startEpoch = currentEpoch;\\n account.epochsPaid = currentEpoch;\\n GetRoute.agentPolice(router).addPoolToList(vc.subject, id);\\n }\\n\\n account.principal += vc.value;\\n account.save(router, vc.subject, id);\\n\\n totalBorrowed += vc.value;\\n\\n emit Borrow(vc.subject, vc.value);\\n\\n // interact - here `msg.sender` must be the Agent bc of the `subjectIsAgentCaller` modifier\\n asset.transfer(msg.sender, vc.value);\\n}\\n```\\n\\nThe following modifier checks that the caller is an Agent:\\n```\\nmodifier subjectIsAgentCaller(VerifiableCredential memory vc) {\\n if (\\n GetRoute.agentFactory(router).agents(msg.sender) != vc.subject\\n ) revert Unauthorized();\\n \\_;\\n}\\n```\\n\\nBut if the caller is not an `Agent`, the `GetRoute.agentFactory(router).agents(msg.sender)` will return `0`. And if the `vc.subject` is also zero, the check will be successful with any `msg.sender`. The attacker can also pass an arbitrary `vc.value` as the parameter and steal all the funds from the pool.
Ensure only an `Agent` can call `borrow` and pass the `subjectIsAgentCaller` modifier.
null
```\\nfunction borrow(VerifiableCredential memory vc) external isOpen subjectIsAgentCaller(vc) {\\n // 1e18 => 1 FIL, can't borrow less than 1 FIL\\n if (vc.value < WAD) revert InvalidParams();\\n // can't borrow more than the pool has\\n if (totalBorrowableAssets() < vc.value) revert InsufficientLiquidity();\\n Account memory account = \\_getAccount(vc.subject);\\n // fresh account, set start epoch and epochsPaid to beginning of current window\\n if (account.principal == 0) {\\n uint256 currentEpoch = block.number;\\n account.startEpoch = currentEpoch;\\n account.epochsPaid = currentEpoch;\\n GetRoute.agentPolice(router).addPoolToList(vc.subject, id);\\n }\\n\\n account.principal += vc.value;\\n account.save(router, vc.subject, id);\\n\\n totalBorrowed += vc.value;\\n\\n emit Borrow(vc.subject, vc.value);\\n\\n // interact - here `msg.sender` must be the Agent bc of the `subjectIsAgentCaller` modifier\\n asset.transfer(msg.sender, vc.value);\\n}\\n```\\n
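A hedged sketch of a hardened modifier that closes the `vc.subject == 0` hole by requiring the caller to be a registered Agent:\\n```\\nmodifier subjectIsAgentCaller(VerifiableCredential memory vc) {\\n uint256 agentID = GetRoute.agentFactory(router).agents(msg.sender);\\n // unregistered callers map to ID 0, so reject 0 explicitly\\n if (agentID == 0 || agentID != vc.subject) revert Unauthorized();\\n \\_;\\n}\\n```\\n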
Wrong Accounting for totalBorrowed in the InfinityPool.writeOff Function
high
Here is a part of the `InfinityPool.writeOff` function:\\n```\\n// transfer the assets into the pool\\n// whatever we couldn't pay back\\nuint256 lostAmt = principalOwed > recoveredFunds ? principalOwed - recoveredFunds : 0;\\n\\nuint256 totalOwed = interestPaid + principalOwed;\\n\\nasset.transferFrom(\\n msg.sender,\\n address(this),\\n totalOwed > recoveredFunds ? recoveredFunds : totalOwed\\n);\\n// write off only what we lost\\ntotalBorrowed -= lostAmt;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\n\\naccount.save(router, agentID, id);\\n```\\n\\nThe `totalBorrowed` is decreased by the `lostAmt` value. Instead, it should be decreased by the original `account.principal` value to acknowledge the loss.
Resolution\\nFixed.
null
```\\n// transfer the assets into the pool\\n// whatever we couldn't pay back\\nuint256 lostAmt = principalOwed > recoveredFunds ? principalOwed - recoveredFunds : 0;\\n\\nuint256 totalOwed = interestPaid + principalOwed;\\n\\nasset.transferFrom(\\n msg.sender,\\n address(this),\\n totalOwed > recoveredFunds ? recoveredFunds : totalOwed\\n);\\n// write off only what we lost\\ntotalBorrowed -= lostAmt;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\n\\naccount.save(router, agentID, id);\\n```\\n
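The corrected bookkeeping, sketched: decrease `totalBorrowed` by the full pre-write-off principal before overwriting it:\\n```\\n// acknowledge the loss of the entire outstanding principal, not just the unrecovered part\\ntotalBorrowed -= account.principal;\\n// set the account with the funds the pool lost\\naccount.principal = lostAmt;\\naccount.save(router, agentID, id);\\n```\\n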
Potential Reentrancy Into Strategies
medium
The `StrategyManager` contract is the entry point for deposits into and withdrawals from strategies. More specifically, to `deposit` into a strategy, a staker calls `depositIntoStrategy` (or anyone calls `depositIntoStrategyWithSignature` with the staker's signature), and then the asset is transferred from the staker to the strategy contract. After that, the strategy's `deposit` function is called, followed by some bookkeeping in the `StrategyManager`. For withdrawals (and slashing), the `StrategyManager` calls the strategy's `withdraw` function, which transfers the given amount of the asset to the given recipient. Both token transfers are a potential source of reentrancy if the token allows it.\\nThe `StrategyManager` uses OpenZeppelin's `ReentrancyGuardUpgradeable` as reentrancy protection, and the relevant functions have a `nonReentrant` modifier. The `StrategyBase` contract - from which concrete strategies should be derived - does not have reentrancy protection. However, the functions `deposit` and `withdraw` can only be called from the `StrategyManager`, so reentering these is impossible.\\nNevertheless, other functions could be reentered, for example, `sharesToUnderlyingView` and `underlyingToSharesView`, as well as their (supposedly) non-view counterparts.\\nLet's look at the `withdraw` function in `StrategyBase`. First, the `amountShares` shares are burnt, and at the end of the function, the equivalent amount of `token` is transferred to the depositor:\\n```\\nfunction withdraw(address depositor, IERC20 token, uint256 amountShares)\\n external\\n virtual\\n override\\n onlyWhenNotPaused(PAUSED\\_WITHDRAWALS)\\n onlyStrategyManager\\n{\\n require(token == underlyingToken, "StrategyBase.withdraw: Can only withdraw the strategy token");\\n // copy `totalShares` value to memory, prior to any decrease\\n uint256 priorTotalShares = totalShares;\\n require(\\n amountShares <= priorTotalShares,\\n "StrategyBase.withdraw: amountShares must be less than or equal to totalShares"\\n );\\n\\n // Calculate the value that `totalShares` will decrease to as a result of the withdrawal\\n uint256 updatedTotalShares = priorTotalShares - amountShares;\\n // check to avoid edge case where share rate can be massively inflated as a 'griefing' sort of attack\\n require(updatedTotalShares >= MIN\\_NONZERO\\_TOTAL\\_SHARES || updatedTotalShares == 0,\\n "StrategyBase.withdraw: updated totalShares amount would be nonzero but below MIN\\_NONZERO\\_TOTAL\\_SHARES");\\n // Actually decrease the `totalShares` value\\n totalShares = updatedTotalShares;\\n\\n /\\*\\*\\n \\* @notice calculation of amountToSend \\*mirrors\\* `sharesToUnderlying(amountShares)`, but is different since the `totalShares` has already\\n \\* been decremented. Specifically, notice how we use `priorTotalShares` here instead of `totalShares`.\\n \\*/\\n uint256 amountToSend;\\n if (priorTotalShares == amountShares) {\\n amountToSend = \\_tokenBalance();\\n } else {\\n amountToSend = (\\_tokenBalance() \\* amountShares) / priorTotalShares;\\n }\\n\\n underlyingToken.safeTransfer(depositor, amountToSend);\\n}\\n```\\n\\nIf we assume that the `token` contract has a callback to the recipient of the transfer before the actual balance changes take place, then the recipient could reenter the strategy contract, for example, in sharesToUnderlyingView:\\n```\\nfunction sharesToUnderlyingView(uint256 amountShares) public view virtual override returns (uint256) {\\n if (totalShares == 0) {\\n return amountShares;\\n } else {\\n return (\\_tokenBalance() \\* amountShares) / totalShares;\\n }\\n}\\n```\\n\\nThe crucial point is: If the callback is executed before the actual balance change, then `sharesToUnderlyingView` will report a bad result because the shares have already been burnt, but the token balance has not been updated yet.\\nFor deposits, the token transfer to the strategy happens first, and the shares are minted after that:\\n```\\nfunction \\_depositIntoStrategy(address depositor, IStrategy strategy, IERC20 token, uint256 amount)\\n internal\\n onlyStrategiesWhitelistedForDeposit(strategy)\\n returns (uint256 shares)\\n{\\n // transfer tokens from the sender to the strategy\\n token.safeTransferFrom(msg.sender, address(strategy), amount);\\n\\n // deposit the assets into the specified strategy and get the equivalent amount of shares in that strategy\\n shares = strategy.deposit(token, amount);\\n```\\n\\n```\\nfunction deposit(IERC20 token, uint256 amount)\\n external\\n virtual\\n override\\n onlyWhenNotPaused(PAUSED\\_DEPOSITS)\\n onlyStrategyManager\\n returns (uint256 newShares)\\n{\\n require(token == underlyingToken, "StrategyBase.deposit: Can only deposit underlyingToken");\\n\\n /\\*\\*\\n \\* @notice calculation of newShares \\*mirrors\\* `underlyingToShares(amount)`, but is different since the balance of `underlyingToken`\\n \\* has already been increased due to the `strategyManager` transferring tokens to this strategy prior to calling this function\\n \\*/\\n uint256 priorTokenBalance = \\_tokenBalance() - amount;\\n if (priorTokenBalance == 0 || totalShares == 0) {\\n newShares = amount;\\n } else {\\n newShares = (amount \\* totalShares) / priorTokenBalance;\\n }\\n\\n // checks to ensure correctness / avoid edge case where share rate can be massively inflated as a 'griefing' sort of attack\\n require(newShares != 0, "StrategyBase.deposit: newShares cannot be zero");\\n uint256 updatedTotalShares = totalShares + newShares;\\n require(updatedTotalShares >= MIN\\_NONZERO\\_TOTAL\\_SHARES,\\n "StrategyBase.deposit: updated totalShares amount would be nonzero but below MIN\\_NONZERO\\_TOTAL\\_SHARES");\\n\\n // update total share amount\\n totalShares = updatedTotalShares;\\n return newShares;\\n}\\n```\\n\\nThat means if there is a callback in the token's `transferFrom` function and it is executed after the balance change, a reentering call to `sharesToUnderlyingView` (for example) will again return a wrong result because shares and token balances are not “in sync.”\\nIn addition to the reversed order of token transfer and shares update, there's another vital difference between `withdraw` and deposit: For withdrawals, the call to the token contract originates in the strategy, while for deposits, it is the strategy manager that initiates the call to the token contract (before calling into the strategy). That's a technicality that has consequences for reentrancy protection: Note that for withdrawals, it is the strategy contract that is reentered, while for deposits, there is not a single contract that is reentered; instead, it is the contract system that is in an inconsistent state when the reentrancy happens. Hence, reentrancy protection on the level of individual contracts is not sufficient.\\nFinally, we want to discuss through which functions in the strategy contract the system could be reentered. As mentioned, `deposit` and `withdraw` can only be called by the strategy manager, so these two can be ruled out. For the examples above, we considered `sharesToUnderlyingView`, which (as the name suggests) is a `view` function. As such, it can't change the state of the contract, so reentrancy through a `view` function can only be a problem for other contracts that use this function and rely on its return value. However, there is also a potentially state-changing variant, `sharesToUnderlying`, and similar potentially state-changing functions, such as `underlyingToShares` and `userUnderlying`. Currently, these functions are not actually state-changing, but the idea is that they could be and, in some concrete strategy implementations that inherit from `StrategyBase`, will be. In such cases, these functions could make wrong state changes due to state inconsistency during reentrancy.\\nThe examples above assume that the token contract allows reentrancy through its `transfer` function before the balance change has been made or in its `transferFrom` function after. It might be tempting to argue that tokens which don't fall into this category are safe to use. While the examples discussed above are the most interesting attack vectors we found, there might still be others: To illustrate this point, assume a token contract that allows reentrancy through `transferFrom` only before any state change in the token takes place. Since the token `transfer` is the first thing that happens in `StrategyManager._depositIntoStrategy`, and the state changes (user shares) and the call to the strategy's `deposit` function occur later, this might look safe. However, if the `deposit` happens via `StrategyManager.depositIntoStrategyWithSignature`, then it can be seen, for example, that the staker's nonce is updated before the internal `\\_depositIntoStrategy` function is called:\\n```\\nfunction depositIntoStrategyWithSignature(\\n IStrategy strategy,\\n IERC20 token,\\n uint256 amount,\\n address staker,\\n uint256 expiry,\\n bytes memory signature\\n)\\n external\\n onlyWhenNotPaused(PAUSED\\_DEPOSITS)\\n onlyNotFrozen(staker)\\n nonReentrant\\n returns (uint256 shares)\\n{\\n require(\\n expiry >= block.timestamp,\\n "StrategyManager.depositIntoStrategyWithSignature: signature expired"\\n );\\n // calculate struct hash, then increment `staker`'s nonce\\n uint256 nonce = nonces[staker];\\n bytes32 structHash = keccak256(abi.encode(DEPOSIT\\_TYPEHASH, strategy, token, amount, nonce, expiry));\\n unchecked {\\n nonces[staker] = nonce + 1;\\n }\\n bytes32 digestHash = keccak256(abi.encodePacked("\\x19\\x01", DOMAIN\\_SEPARATOR, structHash));\\n\\n\\n /\\*\\*\\n \\* check validity of signature:\\n \\* 1) if `staker` is an EOA, then `signature` must be a valid ECSDA signature from `staker`,\\n \\* indicating their intention for this action\\n \\* 2) if `staker` is a contract, then `signature` must will be checked according to EIP-1271\\n \\*/\\n if (Address.isContract(staker)) {\\n require(IERC1271(staker).isValidSignature(digestHash, signature) == ERC1271\\_MAGICVALUE,\\n "StrategyManager.depositIntoStrategyWithSignature: ERC1271 signature verification failed");\\n } else {\\n require(ECDSA.recover(digestHash, signature) == staker,\\n "StrategyManager.depositIntoStrategyWithSignature: signature not from staker");\\n }\\n\\n shares = \\_depositIntoStrategy(staker, strategy, token, amount);\\n}\\n```\\n\\nHence, querying the staker's nonce during reentrancy would still give a result based on an “incomplete state change.” It is, for example, conceivable that the staker still has zero shares, and yet their nonce is already 1. This particular situation is most likely not an issue, but the example shows that reentrancy can be subtle.
This is fine if the token doesn't allow reentrancy in the first place. As discussed above, among the tokens that do allow reentrancy, some variants of when reentrancy can happen in relation to state changes in the token seem more dangerous than others, but we have also argued that this kind of reasoning is subtle and error-prone. Hence, we recommend employing comprehensive and defensive reentrancy protection based on reentrancy guards such as OpenZeppelin's ReentrancyGuardUpgradeable, which is already used in the `StrategyManager`.\\nUnfortunately, securing a multi-contract system against reentrancy can be challenging, but we hope the preceding discussion and the following pointers will prove helpful:\\nExternal functions in strategies that should only be callable by the strategy manager (such as `deposit` and `withdraw`) should have the `onlyStrategyManager` modifier. This is already the case in the current codebase and is listed here only for completeness.\\nExternal functions in strategies for which item 1 doesn't apply (such as `sharesToUnderlying` and `underlyingToShares`) should query the strategy manager's reentrancy lock and revert if it is set.\\nIn principle, the restrictions above also apply to `public` functions, but if a `public` function is also used internally, checks against reentrancy can cause problems (if used in an `internal` context) or at least be redundant. In the context of reentrancy protection, it is often easier to split `public` functions into an `internal` and an `external` one.\\nIf `view` functions are supposed to give reliable results (either internally - which is typically the case - or for other contracts), they have to be protected too.\\nThe previous item also applies to the `StrategyManager`: `view` functions that are supposed to provide correct results should query the reentrancy lock and revert if it is set.\\nSolidity automatically generates getters for `public` state variables. Again, if these (external view) functions must deliver correct results, the same measures as for explicit `view` functions must be taken. In practice, the state variable has to become `internal` or `private`, and the getter function must be hand-written.\\nThe `StrategyBase` contract provides some basic functionality. Concrete strategy implementations can inherit from this contract, meaning that some functions may be overridden (and might or might not call the overridden version via super), and new functions might be added. While the guidelines above should be helpful, derived contracts must be reviewed and assessed separately on a case-by-case basis. As mentioned before, reentrancy protection can be challenging, especially in a multi-contract system.
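To make the second pointer concrete, here is a minimal sketch of how a strategy could query such a lock. Note that the getter `stateLockIsSet` is hypothetical: OpenZeppelin's `ReentrancyGuardUpgradeable` keeps its lock flag private, so the `StrategyManager` would have to expose a getter explicitly.\\n```\\n// Hypothetical interface: assumes the StrategyManager adds a public getter\\n// for its reentrancy lock (OpenZeppelin's guard keeps the flag private).\\ninterface IStrategyManagerWithLock {\\n function stateLockIsSet() external view returns (bool);\\n}\\n\\nabstract contract ReentrancyAwareStrategy {\\n IStrategyManagerWithLock public immutable strategyManager;\\n\\n constructor(IStrategyManagerWithLock \\_strategyManager) {\\n strategyManager = \\_strategyManager;\\n }\\n\\n // For external and view functions that must not run while the\\n // StrategyManager is mid-operation and the system state is inconsistent.\\n modifier whenManagerNotEntered() {\\n require(!strategyManager.stateLockIsSet(), "manager is entered");\\n \\_;\\n }\\n}\\n```\\n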
null
```\\nfunction withdraw(address depositor, IERC20 token, uint256 amountShares)\\n external\\n virtual\\n override\\n onlyWhenNotPaused(PAUSED\\_WITHDRAWALS)\\n onlyStrategyManager\\n{\\n require(token == underlyingToken, "StrategyBase.withdraw: Can only withdraw the strategy token");\\n // copy `totalShares` value to memory, prior to any decrease\\n uint256 priorTotalShares = totalShares;\\n require(\\n amountShares <= priorTotalShares,\\n "StrategyBase.withdraw: amountShares must be less than or equal to totalShares"\\n );\\n\\n // Calculate the value that `totalShares` will decrease to as a result of the withdrawal\\n uint256 updatedTotalShares = priorTotalShares - amountShares;\\n // check to avoid edge case where share rate can be massively inflated as a 'griefing' sort of attack\\n require(updatedTotalShares >= MIN\\_NONZERO\\_TOTAL\\_SHARES || updatedTotalShares == 0,\\n "StrategyBase.withdraw: updated totalShares amount would be nonzero but below MIN\\_NONZERO\\_TOTAL\\_SHARES");\\n // Actually decrease the `totalShares` value\\n totalShares = updatedTotalShares;\\n\\n /\\*\\*\\n \\* @notice calculation of amountToSend \\*mirrors\\* `sharesToUnderlying(amountShares)`, but is different since the `totalShares` has already\\n \\* been decremented. Specifically, notice how we use `priorTotalShares` here instead of `totalShares`.\\n \\*/\\n uint256 amountToSend;\\n if (priorTotalShares == amountShares) {\\n amountToSend = \\_tokenBalance();\\n } else {\\n amountToSend = (\\_tokenBalance() \\* amountShares) / priorTotalShares;\\n }\\n\\n underlyingToken.safeTransfer(depositor, amountToSend);\\n}\\n```\\n
StrategyBase - Inflation Attack Prevention Can Lead to Stuck Funds
low
As a defense against what has come to be known as inflation or donation attack in the context of ERC-4626, the `StrategyBase` contract - from which concrete strategy implementations are supposed to inherit - enforces that the amount of shares in existence for a particular strategy is always either 0 or at least a certain minimum amount that is set to 10^9. This mitigates inflation attacks, which require a small total supply of shares to be effective.\\n```\\nuint256 updatedTotalShares = totalShares + newShares;\\nrequire(updatedTotalShares >= MIN\\_NONZERO\\_TOTAL\\_SHARES,\\n "StrategyBase.deposit: updated totalShares amount would be nonzero but below MIN\\_NONZERO\\_TOTAL\\_SHARES");\\n```\\n\\n```\\n// Calculate the value that `totalShares` will decrease to as a result of the withdrawal\\nuint256 updatedTotalShares = priorTotalShares - amountShares;\\n// check to avoid edge case where share rate can be massively inflated as a 'griefing' sort of attack\\nrequire(updatedTotalShares >= MIN\\_NONZERO\\_TOTAL\\_SHARES || updatedTotalShares == 0,\\n "StrategyBase.withdraw: updated totalShares amount would be nonzero but below MIN\\_NONZERO\\_TOTAL\\_SHARES");\\n```\\n\\nThis particular approach has the downside that, in the worst case, a user may be unable to withdraw the underlying asset for up to 10^9 - 1 shares. While the extreme circumstances under which this can happen might be unlikely to occur in a realistic setting and, in many cases, the value of 10^9 - 1 shares may be negligible, this is not ideal.
It isn't easy to give a good general recommendation. None of the suggested mitigations is without a downside, and the best choice may depend on the specific situation. We do, however, feel that alternative approaches that can't lead to stuck funds might be worth considering, especially for a default implementation.\\nOne option is internal accounting, i.e., the strategy keeps track of the number of underlying tokens it owns and uses this number for conversion-rate calculations instead of its balance in the token contract. This avoids the donation attack because sending tokens directly to the strategy will not affect the conversion rate. Moreover, this technique helps prevent reentrancy issues when the EigenLayer state is out of sync with the token contract's state. The downside is higher gas costs and that donating by just sending tokens to the contract is impossible; more specifically, if it happens accidentally, the funds are lost unless there's some special mechanism to recover them.\\nAn alternative approach with virtual shares and assets is presented here, and the document lists pointers to more discussions and proposed solutions.
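For illustration, a minimal sketch of the internal-accounting option (the variable `totalUnderlying` and the helper functions are ours, not part of the existing code):\\n```\\ncontract InternalAccountingSketch {\\n uint256 internal totalShares;\\n // Tracked explicitly instead of reading the live token balance, so tokens\\n // sent directly to the contract cannot move the exchange rate.\\n uint256 internal totalUnderlying;\\n\\n // To be called from the deposit and withdrawal paths only.\\n function \\_increaseUnderlying(uint256 amount) internal {\\n totalUnderlying += amount;\\n }\\n\\n function \\_decreaseUnderlying(uint256 amount) internal {\\n totalUnderlying -= amount;\\n }\\n\\n function sharesToUnderlyingView(uint256 amountShares) public view returns (uint256) {\\n if (totalShares == 0) {\\n return amountShares;\\n }\\n return (totalUnderlying \\* amountShares) / totalShares;\\n }\\n}\\n```\\n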
null
```\\nuint256 updatedTotalShares = totalShares + newShares;\\nrequire(updatedTotalShares >= MIN\\_NONZERO\\_TOTAL\\_SHARES,\\n "StrategyBase.deposit: updated totalShares amount would be nonzero but below MIN\\_NONZERO\\_TOTAL\\_SHARES");\\n```\\n
StrategyWrapper - Functions Shouldn't Be virtual (Out of Scope)
low
The `StrategyWrapper` contract is a straightforward strategy implementation and - as its NatSpec documentation explicitly states - is not designed to be inherited from:\\n```\\n/\\*\\*\\n \\* @title Extremely simple implementation of `IStrategy` interface.\\n \\* @author Layr Labs, Inc.\\n \\* @notice Simple, basic, "do-nothing" Strategy that holds a single underlying token and returns it on withdrawals.\\n \\* Assumes shares are always 1-to-1 with the underlyingToken.\\n \\* @dev Unlike `StrategyBase`, this contract is \\*not\\* designed to be inherited from.\\n \\* @dev This contract is expressly \\*not\\* intended for use with 'fee-on-transfer'-type tokens.\\n \\* Setting the `underlyingToken` to be a fee-on-transfer token may result in improper accounting.\\n \\*/\\ncontract StrategyWrapper is IStrategy {\\n```\\n\\nHowever, all functions in this contract are `virtual`, which only makes sense if inheriting from `StrategyWrapper` is possible.
Assuming the NatSpec documentation is correct, and no contract should inherit from `StrategyWrapper`, remove the `virtual` keyword from all function definitions. Otherwise, fix the documentation.\\nRemark\\nThis contract is out of scope, and this finding is only included because we noticed it accidentally. This does not mean we have reviewed the contract or other out-of-scope files.
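For illustration (the signature follows `IStrategy`; body and modifiers elided):\\n```\\n// Before: overridable, contradicting the "not designed to be inherited from" NatSpec.\\nfunction deposit(IERC20 token, uint256 amount) external virtual override returns (uint256 newShares) { /\\* ... \\*/ }\\n\\n// After: without `virtual`, derived contracts cannot override the function.\\nfunction deposit(IERC20 token, uint256 amount) external override returns (uint256 newShares) { /\\* ... \\*/ }\\n```\\n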
null
```\\n/\\*\\*\\n \\* @title Extremely simple implementation of `IStrategy` interface.\\n \\* @author Layr Labs, Inc.\\n \\* @notice Simple, basic, "do-nothing" Strategy that holds a single underlying token and returns it on withdrawals.\\n \\* Assumes shares are always 1-to-1 with the underlyingToken.\\n \\* @dev Unlike `StrategyBase`, this contract is \\*not\\* designed to be inherited from.\\n \\* @dev This contract is expressly \\*not\\* intended for use with 'fee-on-transfer'-type tokens.\\n \\* Setting the `underlyingToken` to be a fee-on-transfer token may result in improper accounting.\\n \\*/\\ncontract StrategyWrapper is IStrategy {\\n```\\n
StrategyBase - Inheritance-Related Issues
low
A. The `StrategyBase` contract defines `view` functions that, given an amount of shares, return the equivalent amount of tokens (`sharesToUnderlyingView`) and vice versa (`underlyingToSharesView`). These two functions also have non-view counterparts: `sharesToUnderlying` and `underlyingToShares`, and their NatSpec documentation explicitly states that they should be allowed to make state changes. Given the scope of this engagement, it is unclear if these non-view versions are needed, but assuming they are, this currently does not work as intended.\\nFirst, the interface `IStrategy` declares `underlyingToShares` as `view` (unlike `sharesToUnderlying`). This means overriding this function in derived contracts is impossible without the `view` modifier. Hence, in `StrategyBase` - which implements the `IStrategy` interface - this (virtual) function is (and has to be) `view`. The same applies to overridden versions of this function in contracts inherited from `StrategyBase`.\\n```\\n/\\*\\*\\n \\* @notice Used to convert an amount of underlying tokens to the equivalent amount of shares in this strategy.\\n \\* @notice In contrast to `underlyingToSharesView`, this function \\*\\*may\\*\\* make state modifications\\n \\* @param amountUnderlying is the amount of `underlyingToken` to calculate its conversion into strategy shares\\n \\* @dev Implementation for these functions in particular may vary signifcantly for different strategies\\n \\*/\\nfunction underlyingToShares(uint256 amountUnderlying) external view returns (uint256);\\n```\\n\\n```\\n/\\*\\*\\n \\* @notice Used to convert an amount of underlying tokens to the equivalent amount of shares in this strategy.\\n \\* @notice In contrast to `underlyingToSharesView`, this function \\*\\*may\\*\\* make state modifications\\n \\* @param amountUnderlying is the amount of `underlyingToken` to calculate its conversion into strategy shares\\n \\* @dev Implementation for these functions in particular may vary signifcantly for different strategies\\n \\*/\\nfunction underlyingToShares(uint256 amountUnderlying) external view virtual returns (uint256) {\\n return underlyingToSharesView(amountUnderlying);\\n}\\n```\\n\\nAs mentioned above, the `sharesToUnderlying` function does not have the `view` modifier in the interface `IStrategy`. However, the overridden (and virtual) version in `StrategyBase` does, which means again that overriding this function in contracts inherited from `StrategyBase` is impossible without the `view` modifier.\\n```\\n/\\*\\*\\n \\* @notice Used to convert a number of shares to the equivalent amount of underlying tokens for this strategy.\\n \\* @notice In contrast to `sharesToUnderlyingView`, this function \\*\\*may\\*\\* make state modifications\\n \\* @param amountShares is the amount of shares to calculate its conversion into the underlying token\\n \\* @dev Implementation for these functions in particular may vary signifcantly for different strategies\\n \\*/\\nfunction sharesToUnderlying(uint256 amountShares) public view virtual override returns (uint256) {\\n return sharesToUnderlyingView(amountShares);\\n}\\n```\\n\\nB. The `initialize` function in the `StrategyBase` contract is not virtual, which means it cannot be overridden in derived contracts (a function with the same name can only be added with different parameter types). It also has the `initializer` modifier, which prevents a derived strategy's own `initializer` function from calling it.
A. If state-changing versions of the conversion functions are needed, the `view` modifier has to be removed from `IStrategy.underlyingToShares`, `StrategyBase.underlyingToShares`, and `StrategyBase.sharesToUnderlying`. They should be removed entirely from the interface and base contract if they're not needed.\\nB. Consider making the `StrategyBase` contract `abstract`, maybe give the `initialize` function a more specific name such as `_initializeStrategyBase`, change its visibility to `internal`, and use the `onlyInitializing` modifier instead of `initializer`.
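A minimal sketch of recommendation B (contract and function names are illustrative):\\n```\\nimport {Initializable} from "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";\\nimport {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";\\n\\nabstract contract StrategyBaseSketch is Initializable {\\n IERC20 public underlyingToken;\\n\\n // `onlyInitializing` (instead of `initializer`) allows a derived\\n // strategy's own initializer to call into this function.\\n function \\_initializeStrategyBase(IERC20 \\_underlyingToken) internal onlyInitializing {\\n underlyingToken = \\_underlyingToken;\\n }\\n}\\n\\ncontract ConcreteStrategy is StrategyBaseSketch {\\n function initialize(IERC20 \\_underlyingToken) external initializer {\\n \\_initializeStrategyBase(\\_underlyingToken);\\n }\\n}\\n```\\n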
null
```\\n/\\*\\*\\n \\* @notice Used to convert an amount of underlying tokens to the equivalent amount of shares in this strategy.\\n \\* @notice In contrast to `underlyingToSharesView`, this function \\*\\*may\\*\\* make state modifications\\n \\* @param amountUnderlying is the amount of `underlyingToken` to calculate its conversion into strategy shares\\n \\* @dev Implementation for these functions in particular may vary signifcantly for different strategies\\n \\*/\\nfunction underlyingToShares(uint256 amountUnderlying) external view returns (uint256);\\n```\\n
StrategyManager - Cross-Chain Replay Attacks After Chain Split Due to Hard-Coded DOMAIN_SEPARATOR
low
A. The `StrategyManager` contract allows stakers to deposit into and withdraw from strategies. A staker can either deposit themself or have someone else do it on their behalf, where the latter requires an EIP-712-compliant signature. The EIP-712 domain separator is computed in the `initialize` function and stored in a state variable for later retrieval:\\n```\\n/// @notice EIP-712 Domain separator\\nbytes32 public DOMAIN\\_SEPARATOR;\\n```\\n\\n```\\nfunction initialize(address initialOwner, address initialStrategyWhitelister, IPauserRegistry \\_pauserRegistry, uint256 initialPausedStatus, uint256 \\_withdrawalDelayBlocks)\\n external\\n initializer\\n{\\n DOMAIN\\_SEPARATOR = keccak256(abi.encode(DOMAIN\\_TYPEHASH, bytes("EigenLayer"), block.chainid, address(this)));\\n```\\n\\nOnce set in the `initialize` function, the value can't be changed anymore. In particular, the chain ID is “baked into” the `DOMAIN_SEPARATOR` during initialization. However, it is not necessarily constant: In the event of a chain split, only one of the resulting chains gets to keep the original chain ID, and the other should use a new one. With the current approach to compute the `DOMAIN_SEPARATOR` during initialization, store it, and then use the stored value for signature verification, a signature will be valid on both chains after a split - but it should not be valid on the chain with the new ID. Hence, the domain separator should be computed dynamically.\\nB. The `name` in the `EIP712Domain` is of type string:\\n```\\nbytes32 public constant DOMAIN\\_TYPEHASH =\\n keccak256("EIP712Domain(string name,uint256 chainId,address verifyingContract)");\\n```\\n\\nWhat's encoded when the domain separator is computed is bytes("EigenLayer"):\\n```\\nDOMAIN\\_SEPARATOR = keccak256(abi.encode(DOMAIN\\_TYPEHASH, bytes("EigenLayer"), block.chainid, address(this)));\\n```\\n\\nAccording to EIP-712,\\nThe dynamic values `bytes` and `string` are encoded as a `keccak256` hash of their contents.\\nHence, `bytes("EigenLayer")` should be replaced with `keccak256(bytes("EigenLayer"))`.\\nC. The `EIP712Domain` does not include a version string:\\n```\\nbytes32 public constant DOMAIN\\_TYPEHASH =\\n keccak256("EIP712Domain(string name,uint256 chainId,address verifyingContract)");\\n```\\n\\nThat is allowed according to the specification. However, given that most, if not all, projects, as well as OpenZeppelin's EIP-712 implementation, do include a version string in their `EIP712Domain`, it might be a pragmatic choice to do the same, perhaps to avoid potential incompatibilities.
Individual recommendations have been given above. Alternatively, you might want to utilize OpenZeppelin's `EIP712Upgradeable` library, which will take care of these issues. Note that some of these changes will break existing signatures.
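For illustration, a sketch of a chain-split-safe pattern (mirroring how OpenZeppelin's EIP-712 implementation caches the separator; the version string is omitted here only to match the existing `DOMAIN_TYPEHASH`):\\n```\\ncontract DomainSeparatorSketch {\\n bytes32 public constant DOMAIN\\_TYPEHASH =\\n keccak256("EIP712Domain(string name,uint256 chainId,address verifyingContract)");\\n\\n bytes32 private \\_cachedDomainSeparator;\\n uint256 private \\_cachedChainId;\\n\\n // Called once, e.g., from `initialize`.\\n function \\_initDomainSeparator() internal {\\n \\_cachedChainId = block.chainid;\\n \\_cachedDomainSeparator = \\_buildDomainSeparator();\\n }\\n\\n function \\_buildDomainSeparator() private view returns (bytes32) {\\n // Per EIP-712, the dynamic `name` value is hashed before encoding.\\n return keccak256(\\n abi.encode(DOMAIN\\_TYPEHASH, keccak256(bytes("EigenLayer")), block.chainid, address(this))\\n );\\n }\\n\\n function domainSeparator() public view returns (bytes32) {\\n // After a chain split, the chain with the new ID recomputes the\\n // separator, so signatures cannot be replayed across both chains.\\n if (block.chainid == \\_cachedChainId) {\\n return \\_cachedDomainSeparator;\\n }\\n return \\_buildDomainSeparator();\\n }\\n}\\n```\\n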
null
```\\n/// @notice EIP-712 Domain separator\\nbytes32 public DOMAIN\\_SEPARATOR;\\n```\\n
StrategyManagerStorage - Miscalculated Gap Size
low
Upgradeable contracts should have a “gap” of unused storage slots at the end to allow for adding state variables when the contract is upgraded. The convention is to have a gap whose size adds up to 50 with the used slots at the beginning of the contract's storage.\\nIn `StrategyManagerStorage`, the number of consecutively used storage slots is 10:\\n`DOMAIN_SEPARATOR`\\n`nonces`\\n`strategyWhitelister`\\n`withdrawalDelayBlocks`\\n`stakerStrategyShares`\\n`stakerStrategyList`\\n`withdrawalRootPending`\\n`numWithdrawalsQueued`\\n`strategyIsWhitelistedForDeposit`\\n`beaconChainETHSharesToDecrementOnWithdrawal`\\nHowever, the gap size in the storage contract is 41:\\n```\\nuint256[41] private \\_\\_gap;\\n```\\n
If you don't have to maintain compatibility with an existing deployment, we recommend reducing the storage gap size to 40. Otherwise, we recommend adding a comment explaining that, in this particular case, the gap size and the used storage slots should add up to 51 instead of 50 and that this invariant has to be maintained in future versions of this contract.
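For instance (assuming compatibility with existing deployments does not need to be maintained):\\n```\\n// 10 storage slots are in use above; 40 unused slots keep the total at 50.\\nuint256[40] private \\_\\_gap;\\n```\\n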
null
```\\nuint256[41] private \\_\\_gap;\\n```\\n
Funds Refunded From Celer Bridge Might Be Stolen
high
```\\nif (!router.withdraws(transferId)) {\\n router.withdraw(\\_request, \\_sigs, \\_signers, \\_powers);\\n}\\n```\\n\\nFrom the point of view of the Celer bridge, the initial depositor of the tokens is the `SocketGateway`. As a consequence, the Celer contract transfers the tokens to be refunded to the gateway. The gateway is then in charge of forwarding the tokens to the initial depositor. To achieve this, it keeps a mapping of unique transfer IDs to depositor addresses. Once a refund is processed, the corresponding address in the mapping is reset to the zero address.\\nLooking at the `withdraw` function of the Celer pool, we see that for some tokens, it is possible that the reimbursement will not be processed directly, but only after some delay. From the gateway's point of view, the reimbursement will be marked as successful, and the address of the original sender corresponding to this transfer ID will be reset to `address(0)`.\\n```\\nif (delayThreshold > 0 && wdmsg.amount > delayThreshold) {\\n _addDelayedTransfer(wdId, wdmsg.receiver, wdmsg.token, wdmsg.amount); // <--- here\\n} else {\\n _sendToken(wdmsg.receiver, wdmsg.token, wdmsg.amount);\\n}\\n```\\n\\nIt is then the responsibility of the user, once the locking delay has passed, to call another function to claim the tokens. Unfortunately, in our case, this means that the funds will be sent back to the gateway contract and not to the original sender. Because the gateway implements the `rescueEther` and `rescueFunds` functions, the admin might be able to send the funds back to the user. However, this requires manual intervention and breaks the trustlessness assumptions of the system. Also, in that case, there is no easy way to trace back the original sender address that corresponds to this refund.\\nHowever, there is an additional issue that might allow an attacker to steal some funds from the gateway. Indeed, when claiming the refund, if it is in ETH, the gateway will have some balance when the transaction completes. Any user can then call any function that consumes the gateway balance, such as `swapAndBridge` from `CelerImpl`, to steal the refunded ETH. That is possible as the function relies on a user-provided amount as an input, and not on `msg.value`. Additionally, if the refund is an ERC-20, an attacker can steal the funds by calling `bridgeAfterSwap` or `swapAndBridge` from the `Stargate` or `Celer` routes with the right parameters.\\n```\\nfunction bridgeAfterSwap(\\n uint256 amount,\\n bytes calldata bridgeData\\n) external payable override {\\n CelerBridgeData memory celerBridgeData = abi.decode(\\n bridgeData,\\n (CelerBridgeData)\\n );\\n```\\n\\n```\\nfunction swapAndBridge(\\n uint32 swapId,\\n bytes calldata swapData,\\n StargateBridgeDataNoToken calldata stargateBridgeData\\n```\\n\\nNote that this violates the security assumption: “The contracts are not supposed to hold any funds post-tx execution.”
Make sure that `CelerImpl` also supports the delayed-withdrawals functionality and that withdrawal requests are deleted only if the receiver has actually received the withdrawal within the same transaction.
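A sketch of one possible shape for this (heavily simplified: the getter `getAddressForTransferId` is hypothetical, error handling is omitted, and native-token refunds are not covered - the point is that the transfer ID is cleared and funds are forwarded only when the refund was actually paid out in the same transaction):\\n```\\nimport {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";\\nimport {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";\\n\\ninterface ICelerRouter {\\n function withdraw(bytes calldata request, bytes[] calldata sigs, address[] calldata signers, uint256[] calldata powers) external;\\n}\\n\\ninterface ICelerStorageWrapper {\\n // Hypothetical getter; the wrapper otherwise only exposes set/delete.\\n function getAddressForTransferId(bytes32 transferId) external view returns (address);\\n function deleteTransferId(bytes32 transferId) external;\\n}\\n\\ncontract CelerRefundSketch {\\n using SafeERC20 for IERC20;\\n\\n ICelerRouter public immutable router;\\n ICelerStorageWrapper public immutable celerStorageWrapper;\\n\\n constructor(ICelerRouter \\_router, ICelerStorageWrapper \\_wrapper) {\\n router = \\_router;\\n celerStorageWrapper = \\_wrapper;\\n }\\n\\n function refund(IERC20 token, bytes32 transferId, bytes calldata request, bytes[] calldata sigs, address[] calldata signers, uint256[] calldata powers) external {\\n address depositor = celerStorageWrapper.getAddressForTransferId(transferId);\\n uint256 balanceBefore = token.balanceOf(address(this));\\n router.withdraw(request, sigs, signers, powers);\\n uint256 received = token.balanceOf(address(this)) - balanceBefore;\\n if (received == 0) {\\n // Celer delayed the withdrawal: keep the transferId entry so the\\n // depositor can still claim once the delay has passed.\\n return;\\n }\\n celerStorageWrapper.deleteTransferId(transferId);\\n token.safeTransfer(depositor, received);\\n }\\n}\\n```\\n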
null
```\\nif (!router.withdraws(transferId)) {\\n router.withdraw(\\_request, \\_sigs, \\_signers, \\_powers);\\n}\\n```\\n
Calls Made to Non-Existent/Removed Routes or Controllers Will Not Result in Failure
high
This issue was found in commit hash `a8d0ad1c280a699d88dc280d9648eacaf215fb41`.\\nIn the Ethereum Virtual Machine (EVM), `delegatecall` will succeed for calls to externally owned accounts and, more specifically, to the zero address, which presents a potential security risk. We have identified multiple instances of `delegatecall` being used to invoke smart contract functions.\\nThis, combined with the fact that routes can be removed from the system by the owner of the `SocketGateway` contract using the `disableRoute` function, makes it possible for users' funds to be lost if, for instance, an `executeRoute` transaction waiting in the mempool is front-run by a call to `disableRoute`.\\n```\\n(bool success, bytes memory result) = addressAt(routeId).delegatecall(\\n```\\n\\n```\\n.delegatecall(swapData);\\n```\\n\\n```\\n.delegatecall(swapData);\\n```\\n\\n```\\n.delegatecall(swapData);\\n```\\n\\n```\\n.delegatecall(data);\\n```\\n\\nEven after the upgrade to commit hash `d0841a3e96b54a9d837d2dba471aa0946c3c8e7b`, the following bug is still present:\\nTo optimize gas usage, the `addressAt` function in `SocketGateway` uses a binary search in a hard-coded table to resolve a `routeID` (routeID <= 512) to a contract address. This is made possible thanks to the factory using the `CREATE2` pattern, which allows pre-computing the future addresses of contracts before they are deployed. In case the `routeID` is strictly greater than 512, `addressAt` falls back to fetching the address from a state mapping (routes).\\nThe new commit hash adds a check to make sure that the call to the `addressAt` function reverts in case a `routeID` is not present in the `routes` mapping. This prevents delegate-calling to non-existent addresses in various places of the code. However, this does not solve the issue for the hard-coded route addresses (i.e., `routeID` <= 512). In that case, the `addressAt` function still returns a well-formed route address, even though the corresponding contract might not have been deployed yet. This will result in a successful `delegatecall` later in the code and might lead to various side effects.\\n```\\nfunction addressAt(uint32 routeId) public view returns (address) {\\n if (routeId < 513) {\\n if (routeId < 257) {\\n if (routeId < 129) {\\n if (routeId < 65) {\\n if (routeId < 33) {\\n if (routeId < 17) {\\n if (routeId < 9) {\\n if (routeId < 5) {\\n if (routeId < 3) {\\n if (routeId == 1) {\\n return\\n 0x822D4B4e63499a576Ab1cc152B86D1CFFf794F4f;\\n } else {\\n return\\n 0x822D4B4e63499a576Ab1cc152B86D1CFFf794F4f;\\n }\\n } else {\\n```\\n\\n```\\nif (routes[routeId] == address(0)) revert ZeroAddressNotAllowed();\\nreturn routes[routeId];\\n```\\n
Consider adding a check to validate that the callee of a `delegatecall` is indeed a contract; for this, you may refer to OpenZeppelin's Address library.
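A minimal sketch of such a guard (the wrapper function is ours; `addressAt` is the existing resolver, and revert-reason bubbling is simplified):\\n```\\nfunction \\_delegateToRoute(uint32 routeId, bytes memory data) internal returns (bytes memory) {\\n address route = addressAt(routeId);\\n // A delegatecall to an address without code "succeeds" silently, so fail\\n // loudly if the route was never deployed or has been disabled.\\n require(route.code.length != 0, "route has no code");\\n (bool success, bytes memory result) = route.delegatecall(data);\\n require(success, "route call failed");\\n return result;\\n}\\n```\\n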
null
```\\n(bool success, bytes memory result) = addressAt(routeId).delegatecall(\\n```\\n
Owner Can Add Arbitrary Code to Be Executed From the SocketGateway Contract
medium
The Socket system is managed by the `SocketGateway` contract that maintains all routes and controller addresses within its state. There, the address with the `Owner` role of the `SocketGateway` contract can add new routes and controllers that would have a `delegatecall()` executed upon them from the `SocketGateway` so user transactions can go through the logic required for the bridge, swap, or any other solution integrated with Socket. These routes and controllers would then have arbitrary code that is entirely up to the `Owner`, though users are not required to go through any specific routes and can decide which routes to pick.\\nSince these routes are called via `delegatecall()`, they don't hold any storage variables that would be used in the Socket system. However, as Socket aggregates more solutions, unexpected complexities may arise that could require storing and accessing variables through additional contracts. Those contracts would be access-control protected so that only the `SocketGateway` contract has the privileges to modify their variables.\\nThis, together with the `Owner` of the `SocketGateway` being able to add routes with arbitrary code, creates an attack vector where a compromised address with `Owner` privileges may add a route that contains code exploiting the special privileges assigned to the `SocketGateway` contract for the attacker's benefit.\\nFor example, the Celer bridge needs extra logic to account for its refund mechanism, so there is an additional `CelerStorageWrapper` contract that maintains a mapping between individual bridge transfer transactions and their associated `msg.sender`:\\n```\\ncelerStorageWrapper.setAddressForTransferId(transferId, msg.sender);\\n```\\n\\n```\\n/\\*\\*\\n \\* @title CelerStorageWrapper\\n \\* @notice handle storageMappings used while bridging ERC20 and native on CelerBridge\\n \\* @dev all functions ehich mutate the storage are restricted to Owner of SocketGateway\\n \\* @author Socket dot tech.\\n \\*/\\ncontract CelerStorageWrapper {\\n```\\n\\nConsequently, this contract has access-protected functions that may only be called by the `SocketGateway` to set and delete the transfer IDs:\\n```\\nfunction setAddressForTransferId(\\n```\\n\\n```\\nfunction deleteTransferId(bytes32 transferId) external {\\n```\\n\\nA compromised `Owner` of the `SocketGateway` could then create a route that calls into the `CelerStorageWrapper` contract and, via the `deleteTransferId()` and `setAddressForTransferId()` functions, updates the addresses associated with transfer IDs to be under their control. This could lead to a significant drain of user funds, though it depends on the privileged `Owner` address being compromised.
Although such a compromise may indeed be unlikely, for an aggregating solution it is especially important to minimize the impact of compromised access. As future integrations add complexity, consider architecting them so that they require as few administrative and SocketGateway-initiated transactions as possible. Through conversations with the Socket team, it appears that measures such as timelocks on adding new routes are also being considered, which would help catch a malicious route before it becomes active.
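A rough sketch of the timelock idea (all names are illustrative):\\n```\\ncontract RouteTimelockSketch {\\n uint256 public constant ADD\\_ROUTE\\_DELAY = 2 days;\\n\\n address public immutable owner;\\n mapping(address => uint256) public proposedAt; // route => proposal timestamp\\n\\n constructor() {\\n owner = msg.sender;\\n }\\n\\n modifier onlyOwner() {\\n require(msg.sender == owner, "not owner");\\n \\_;\\n }\\n\\n // Step 1: announce the route so its code can be publicly inspected.\\n function proposeRoute(address route) external onlyOwner {\\n proposedAt[route] = block.timestamp;\\n }\\n\\n // Step 2: the route can only be activated after the delay has passed.\\n function addRoute(address route) external onlyOwner {\\n uint256 proposedTime = proposedAt[route];\\n require(proposedTime != 0 && block.timestamp >= proposedTime + ADD\\_ROUTE\\_DELAY, "timelocked");\\n delete proposedAt[route];\\n // ... register `route` as the gateway's addRoute does today\\n }\\n}\\n```\\n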
null
```\\ncelerStorageWrapper.setAddressForTransferId(transferId, msg.sender);\\n```\\n
Dependency on Third-Party APIs to Create the Right Payload
medium
The Socket system of routes and controllers integrates swaps, bridges, and potentially other solutions that are vastly different from each other. The function arguments required to execute them often look like a black-box payload to a typical end user. In fact, even when users explicitly provide a destination `token` with an associated `amount` for a swap, these arguments might be only partially used (or not used at all) in the route itself. Instead, the routes and controllers often accept a `bytes` payload that contains all the necessary data for the action. These data payloads are generated off-chain, often via centralized APIs provided by the integrated systems themselves, which is understandable in isolation as they have to be generated somewhere at some point. However, the provided `bytes` are not checked for correctness or for consistency with the other arguments that the user explicitly provided. Even the events that get emitted refer to the individual function arguments as opposed to what was actually used to execute the logic.\\nFor example, the implementation route for 1inch swaps explicitly asks the user to provide `fromToken`, `toToken`, `amount`, and `receiverAddress`; however, only `fromToken` and `amount` are used meaningfully to transfer the `amount` to the SocketGateway and approve the `fromToken` to be spent by the 1inch contract. Everything else is dictated by `swapExtraData`, including even the true `amount` that is getting swapped. A mishap in the API providing this data payload could cause much less of a token `amount` to be swapped, a wrong address to receive the swap, and even the wrong destination token to be returned.\\n```\\n// additional data is generated in off-chain using the OneInch API which takes in\\n// fromTokenAddress, toTokenAddress, amount, fromAddress, slippage, destReceiver, disableEstimate\\n(bool success, bytes memory result) = ONEINCH\\_AGGREGATOR.call(\\n swapExtraData\\n);\\n```\\n\\nEven the event at the end of the transaction partially refers to the explicitly provided arguments instead of those that actually facilitated the execution of the logic:\\n```\\nemit SocketSwapTokens(\\n fromToken,\\n toToken,\\n returnAmount,\\n amount,\\n OneInchIdentifier,\\n receiverAddress\\n);\\n```\\n\\nAs Socket aggregates other solutions, it naturally incurs the trust assumptions and risks associated with its integrations. In some ways, they even stack on top of each other, especially in those Socket functions that batch several routes together - all of them and their associated API calls need to return the correct payloads. So, there is an opportunity to minimize these risks by introducing additional checks into the contracts that would verify the correctness of the payloads that are passed over to the routes and controllers. In fact, creating these payloads within the contracts would make it simpler for other systems to integrate Socket, as they could just call the functions with primary logical arguments such as the source token, destination token, and amount.
Consider adding checks within the route implementations to ensure that the explicitly passed arguments match what is actually sent to the integrated solutions for execution, as in the 1inch example above.
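One black-box check that requires no decoding of `swapExtraData` is to verify that the declared destination token and receiver actually received funds. A sketch using the variables from the 1inch route above (native-token handling elided):\\n```\\n// Measure the receiver's balance of the declared destination token around\\n// the external call; revert if the payload paid a different token or receiver.\\nuint256 balanceBefore = IERC20(toToken).balanceOf(receiverAddress);\\n(bool success, ) = ONEINCH\\_AGGREGATOR.call(swapExtraData);\\nrequire(success, "swap failed");\\nuint256 returnAmount = IERC20(toToken).balanceOf(receiverAddress) - balanceBefore;\\nrequire(returnAmount != 0, "payload mismatch");\\n```\\n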
null
```\\n// additional data is generated in off-chain using the OneInch API which takes in\\n// fromTokenAddress, toTokenAddress, amount, fromAddress, slippage, destReceiver, disableEstimate\\n(bool success, bytes memory result) = ONEINCH\\_AGGREGATOR.call(\\n swapExtraData\\n);\\n```\\n
NativeOptimismImpl - Events Will Not Be Emitted in Case of Non-Native Tokens Bridging
medium
When users bridge non-native tokens, the `SocketBridge` event will not be emitted, since the code returns early.\\n```\\nfunction bridgeAfterSwap(\\n```\\n\\n```\\nfunction swapAndBridge(\\n```\\n\\n```\\nfunction bridgeERC20To(\\n```\\n
Make sure that the `SocketBridge` event is emitted for non-native tokens as well.
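One way to restructure the code (a sketch - the helper functions and the event signature are illustrative, not copied from the codebase):\\n```\\nevent SocketBridge(uint256 amount, address token, address sender);\\n\\nfunction \\_bridge(address token, uint256 amount) internal {\\n if (token == NATIVE\\_TOKEN\\_ADDRESS) {\\n \\_bridgeNative(amount);\\n } else {\\n \\_bridgeERC20(token, amount); // no early `return` here anymore\\n }\\n // Single exit point: the event now fires for both paths.\\n emit SocketBridge(amount, token, msg.sender);\\n}\\n```\\n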
null
```\\nfunction bridgeAfterSwap(\\n```\\n
Inconsistent Comments
low
Some of the contracts in the code have incorrect developer comments annotated for them. This could create confusion for future readers of this code who may be trying to maintain, audit, update, fork, or integrate it.\\n```\\n/\\*\\*\\n \\* @notice function to bridge tokens after swap. This is used after swap function call\\n \\* @notice This method is payable because the caller is doing token transfer and briding operation\\n \\* @dev for usage, refer to controller implementations\\n \\* encodedData for bridge should follow the sequence of properties in Stargate-BridgeData struct\\n \\* @param swapId routeId for the swapImpl\\n \\* @param swapData encoded data for swap\\n \\* @param stargateBridgeData encoded data for StargateBridgeData\\n \\*/\\nfunction swapAndBridge(\\n```\\n\\nThis is the same comment as on `bridgeAfterSwap`, whereas the function instead performs swapping and bridging together.\\n```\\n/\\*\\*\\n \\* @notice function to store the transferId and message-sender of a bridging activity\\n \\* @notice This method is payable because the caller is doing token transfer and briding operation\\n \\* @dev for usage, refer to controller implementations\\n \\* encodedData for bridge should follow the sequence of properties in CelerBridgeData struct\\n \\* @param transferId transferId generated during the bridging of ERC20 or native on CelerBridge\\n \\* @param transferIdAddress message sender who is making the bridging on CelerBridge\\n \\*/\\nfunction setAddressForTransferId(\\n```\\n\\nThis comment claims the function is payable when it is not.\\n```\\n/\\*\\*\\n \\* @notice function to store the transferId and message-sender of a bridging activity\\n \\* @notice This method is payable because the caller is doing token transfer and briding operation\\n \\* @dev for usage, refer to controller implementations\\n \\* encodedData for bridge should follow the sequence of properties in CelerBridgeData struct\\n \\* @param transferId transferId generated during the bridging of ERC20 or native on CelerBridge\\n \\*/\\nfunction deleteTransferId(bytes32 transferId) external {\\n```\\n\\nThis comment is copied from the function above, although this one does the opposite of storing - it deletes the `transferId`.
Adjust comments so they reflect what the functions are actually doing.
null
```\\n/\\*\\*\\n \\* @notice function to bridge tokens after swap. This is used after swap function call\\n \\* @notice This method is payable because the caller is doing token transfer and briding operation\\n \\* @dev for usage, refer to controller implementations\\n \\* encodedData for bridge should follow the sequence of properties in Stargate-BridgeData struct\\n \\* @param swapId routeId for the swapImpl\\n \\* @param swapData encoded data for swap\\n \\* @param stargateBridgeData encoded data for StargateBridgeData\\n \\*/\\nfunction swapAndBridge(\\n```\\n
Unused Error Codes
low
The following custom errors are defined but never used anywhere in the codebase:\\n`error RouteAlreadyExist();`\\n`error ContractContainsNoCode();`\\n`error ControllerAlreadyExist();`\\n`error ControllerAddressIsZero();`\\nIt seems that they were created as errors that may have been expected to occur during the early stages of development, but the resulting architecture doesn't seem to have a place for them currently.\\n```\\nerror RouteAlreadyExist();\\nerror SwapFailed();\\nerror UnsupportedInterfaceId();\\nerror ContractContainsNoCode();\\nerror InvalidCelerRefund();\\nerror CelerAlreadyRefunded();\\nerror ControllerAlreadyExist();\\nerror ControllerAddressIsZero();\\n```\\n
Resolution\\nRemediated as per the client team in SocketDotTech/socket-ll-contracts#148.\\nConsider revisiting these errors and identifying whether they need to remain or can be removed.
null
```\\nerror RouteAlreadyExist();\\nerror SwapFailed();\\nerror UnsupportedInterfaceId();\\nerror ContractContainsNoCode();\\nerror InvalidCelerRefund();\\nerror CelerAlreadyRefunded();\\nerror ControllerAlreadyExist();\\nerror ControllerAddressIsZero();\\n```\\n