## Inaccurate Interface

Severity: low

The `ISocketGateway` interface declares a `bridge(uint32 routeId, bytes memory data)` function, but no Socket contract, including `SocketGateway` itself, implements a function with this signature:

```
function bridge(
    uint32 routeId,
    bytes memory data
) external payable returns (bytes memory);
```

Recommendation: Adjust the interface so that it matches the functions actually implemented.
## Validate Array Length Matching Before Execution to Avoid Reverts

Severity: low

The Socket system not only aggregates different solutions via its routes and controllers but also allows batching calls between them into one transaction. For example, a user may perform swaps between several DEXs and then a bridge transfer.

As a result, the `SocketGateway` contract has many functions that accept multiple arrays containing the data needed for execution in their respective routes. These arrays must be of the same length because individual elements are intended to be matched at the same indices:

```
function executeRoutes(
    uint32[] calldata routeIds,
    bytes[] calldata dataItems,
    bytes[] calldata eventDataItems
) external payable {
    uint256 routeIdslength = routeIds.length;
    for (uint256 index = 0; index < routeIdslength; ) {
        (bool success, bytes memory result) = addressAt(routeIds[index])
            .delegatecall(dataItems[index]);

        if (!success) {
            assembly {
                revert(add(result, 32), mload(result))
            }
        }

        emit SocketRouteExecuted(routeIds[index], eventDataItems[index]);

        unchecked {
            ++index;
        }
    }
}
```

Note that in the example function above, all 3 calldata arrays `routeIds`, `dataItems`, and `eventDataItems` use the same `index` to retrieve the matching element. A common practice in such cases is to confirm that the array sizes match before continuing with the rest of the transaction, avoiding costly reverts due to an index-out-of-bounds error.

Because the aggregating and batching nature of the Socket system may have its users rely on third-party offchain APIs, such as the APIs of the systems that Socket integrates, to construct these array payloads, a mishap in any one of them could trigger this issue.

Recommendation: Implement a check that the array lengths match.
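A minimal sketch of the recommended check, assuming the `executeRoutes` signature shown above (the revert message is illustrative):

```
function executeRoutes(
    uint32[] calldata routeIds,
    bytes[] calldata dataItems,
    bytes[] calldata eventDataItems
) external payable {
    uint256 routeIdslength = routeIds.length;
    // Revert early if the batched arrays cannot be matched index-by-index
    require(
        dataItems.length == routeIdslength &&
            eventDataItems.length == routeIdslength,
        "Array lengths mismatch"
    );
    // ... existing loop unchanged ...
}
```

Failing fast here keeps a malformed payload from reverting only after several routes have already been delegate-called.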
## Destroyed Routes Eth Balances Will Be Left Locked in SocketDeployFactory

Severity: low

`SocketDeployFactory.destroy` calls the `killme` function, which in turn self-destructs the route and sends any ETH back to the factory contract. However, these funds cannot be claimed from the `SocketDeployFactory` contract.

```
function destroy(uint256 routeId) external onlyDisabler {
```

Recommendation: Make sure that these funds can be claimed.
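One possible remedy is a sweep function on the factory, sketched below under the assumption that the `onlyDisabler` role (already used by `destroy`) is an appropriate authority and that the function name and receiver parameter are hypothetical:

```
// Hypothetical sketch: sweep ETH returned by self-destructed routes
function claimDestroyedRouteFunds(address payable receiver) external onlyDisabler {
    uint256 balance = address(this).balance;
    (bool success, ) = receiver.call{value: balance}("");
    require(success, "ETH claim failed");
}
```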
## RocketNodeDistributorDelegate - Reentrancy in distribute() allows node owner to drain distributor funds

Severity: high

The `distribute()` function distributes the contract's balance between the node operator and the users. The node operator receives their initial collateral back, including a fee. The rest is returned to the RETH token contract as user collateral.

After determining the node owner's share, the contract transfers ETH to the node withdrawal address, which can be the configured withdrawal address or the node address. Either address may be a malicious contract that recursively calls back into `distribute()` to retrieve the node share multiple times until all funds are drained from the contract. The `distribute()` function is not protected against reentrancy:

```
/// @notice Distributes the balance of this contract to its owners
function distribute() override external {
    // Calculate node share
    uint256 nodeShare = getNodeShare();
    // Transfer node share
    address withdrawalAddress = rocketStorage.getNodeWithdrawalAddress(nodeAddress);
    (bool success,) = withdrawalAddress.call{value : nodeShare}("");
    require(success);
    // Transfer user share
    uint256 userShare = address(this).balance;
    address rocketTokenRETH = rocketStorage.getAddress(rocketTokenRETHKey);
    payable(rocketTokenRETH).transfer(userShare);
    // Emit event
    emit FeesDistributed(nodeAddress, userShare, nodeShare, block.timestamp);
}
```

We also noticed that `setWithdrawalAddress` does not check that the caller is a registered node; in fact, the caller can be the withdrawal address or the node operator.

```
// Set a node's withdrawal address
function setWithdrawalAddress(address _nodeAddress, address _newWithdrawalAddress, bool _confirm) external override {
    // Check new withdrawal address
    require(_newWithdrawalAddress != address(0x0), "Invalid withdrawal address");
    // Confirm the transaction is from the node's current withdrawal address
    address withdrawalAddress = getNodeWithdrawalAddress(_nodeAddress);
    require(withdrawalAddress == msg.sender, "Only a tx from a node's withdrawal address can update it");
    // Update immediately if confirmed
    if (_confirm) {
        updateWithdrawalAddress(_nodeAddress, _newWithdrawalAddress);
    }
    // Set pending withdrawal address if not confirmed
    else {
        pendingWithdrawalAddresses[_nodeAddress] = _newWithdrawalAddress;
    }
}
```

Resolution: Fixed in https://github.com/rocket-pool/rocketpool/tree/77d7cca65b7c0557cfda078a4fc45f9ac0cc6cc6 by implementing a custom reentrancy guard via a new state variable `lock` that is appended to the end of the storage layout. The reentrancy guard is functionally equivalent to the OpenZeppelin implementation. The method was not refactored to give user funds priority over the node share. Additionally, the client provided the following statement:

"We acknowledge this as a critical issue and have solved it with a reentrancy guard. We followed OpenZeppelin's design for a reentrancy guard. We were unable to use it directly as it is hardcoded to use storage slot 0, and because existing deployments of this delegate in the wild already use storage slot 0 for another purpose, we had to append it to the end of the existing storage layout."

Recommendation: Add a reentrancy guard to functions that interact with untrusted contracts. Adhere to the checks-effects-interactions pattern and send user funds to the trusted RETH contract first. Only then send funds to the node's withdrawal address.
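A sketch combining both recommendations: a minimal reentrancy guard appended to the storage layout (as described in the resolution) and user funds sent to the trusted RETH contract before the node share. The `sub` call assumes a SafeMath-style library as used elsewhere in the codebase; the guard values are illustrative.

```
uint256 private lock; // appended to the end of the storage layout; 2 = locked

modifier nonReentrant() {
    require(lock != 2, "Reentrant call");
    lock = 2;
    _;
    lock = 1;
}

/// @notice Distributes the balance of this contract to its owners
function distribute() override external nonReentrant {
    // Calculate shares up front
    uint256 nodeShare = getNodeShare();
    uint256 userShare = address(this).balance.sub(nodeShare);
    // Send user share to the trusted RETH contract first (checks-effects-interactions)
    address rocketTokenRETH = rocketStorage.getAddress(rocketTokenRETHKey);
    payable(rocketTokenRETH).transfer(userShare);
    // Only then pay out the potentially untrusted withdrawal address
    address withdrawalAddress = rocketStorage.getNodeWithdrawalAddress(nodeAddress);
    (bool success,) = withdrawalAddress.call{value : nodeShare}("");
    require(success);
    // Emit event
    emit FeesDistributed(nodeAddress, userShare, nodeShare, block.timestamp);
}
```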
## RocketMinipoolDelegateOld - Node operator may reenter finalise() to manipulate accounting

Severity: high

In the old minipool delegate contract, a node operator may call the `finalise()` function to finalize a minipool. As part of this process, a call to `_refund()` may be performed if there is a node refund balance to be transferred. This sends an amount of `nodeRefundBalance` in ETH to the `nodeWithdrawalAddress` via a low-level call, handing control flow to an account that is untrusted in terms of the system and controlled by the node operator. The node operator is therefore granted the opportunity to call back into `finalise()`, which is not protected against reentrancy and violates the checks-effects-interactions pattern (`finalised = true` is only set at the very end), to manipulate the following system settings:

- `node.minipools.finalised.count<NodeAddress>`: the node's finalised count is increased twice instead of once
- `minipools.finalised.count`: the global finalised count is increased twice
- `eth.matched.node.amount<NodeAddress>`: the node's ETH matched amount is potentially reduced too many times; this affects `getNodeETHCollateralisationRatio -> getNodeShare`, `getNodeETHProvided -> getNodeEffectiveRPLStake`, and `getNodeETHProvided -> getNodeMaximumRPLStake -> withdrawRPL`, and is the limiting factor when withdrawing RPL to ensure the pools stay collateralized.

Note: `RocketMinipoolDelegateOld` is assumed to be the currently deployed minipool implementation. Users may upgrade from this delegate to the new version and can roll back at any time and re-upgrade, even within the same transaction (see issue 5.3).

The following is an annotated call stack from a node operator calling `minipool.finalise()` and reentering `finalise()` once more on their minipool:

```
finalise() -->
    status == MinipoolStatus.Withdrawable //<-- true
    withdrawalBlock > 0 //<-- true
    _finalise() -->
        !finalised //<-- true
        _refund()
            nodeRefundBalance = 0 //<-- reset refund balance
            ---> extCall: nodeWithdrawalAddress
            ---> reenter: finalise()
                status == MinipoolStatus.Withdrawable //<-- true
                withdrawalBlock > 0 //<-- true
                _finalise() -->
                    !finalised //<-- true
                    nodeRefundBalance > 0 //<-- false; no refund()
                    address(this).balance to RETH
                    RocketTokenRETHInterface(rocketTokenRETH).depositExcessCollateral()
                    rocketMinipoolManager.incrementNodeFinalisedMinipoolCount(nodeAddress) //<-- 1st time
                    eventually call rocketDAONodeTrusted.decrementMemberUnbondedValidatorCount(nodeAddress);
                    finalised = true;
            <--- return from reentrant call
        <--- return from _refund()
        address(this).balance to RETH //<-- NOP as balance was sent to RETH already
        RocketTokenRETHInterface(rocketTokenRETH).depositExcessCollateral(); //<-- does not revert
        rocketMinipoolManager.incrementNodeFinalisedMinipoolCount(nodeAddress); //<-- no revert, increases
            'node.minipools.finalised.count', 'minipools.finalised.count', reduces 'eth.matched.node.amount' one too
            many times
        eventually call rocketDAONodeTrusted.decrementMemberUnbondedValidatorCount(nodeAddress); //<-- manipulates
            'member.validator.unbonded.count' by +1
        finalised = true; //<-- is already 'true', gracefully continues
<--- returns
```

```
// Called by node operator to finalise the pool and unlock their RPL stake
function finalise() external override onlyInitialised onlyMinipoolOwnerOrWithdrawalAddress(msg.sender) {
    // Can only call if withdrawable and can only be called once
    require(status == MinipoolStatus.Withdrawable, "Minipool must be withdrawable");
    // Node operator cannot finalise the pool unless distributeBalance has been called
    require(withdrawalBlock > 0, "Minipool balance must have been distributed at least once");
    // Finalise the pool
    _finalise();
}
```

```
// Perform any slashings, refunds, and unlock NO's stake
function _finalise() private {
    // Get contracts
    RocketMinipoolManagerInterface rocketMinipoolManager = RocketMinipoolManagerInterface(getContractAddress("rocketMinipoolManager"));
    // Can only finalise the pool once
    require(!finalised, "Minipool has already been finalised");
    // If slash is required then perform it
    if (nodeSlashBalance > 0) {
        _slash();
    }
    // Refund node operator if required
    if (nodeRefundBalance > 0) {
        _refund();
    }
    // Send any left over ETH to rETH contract
    if (address(this).balance > 0) {
        // Send user amount to rETH contract
        payable(rocketTokenRETH).transfer(address(this).balance);
    }
    // Trigger a deposit of excess collateral from rETH contract to deposit pool
    RocketTokenRETHInterface(rocketTokenRETH).depositExcessCollateral();
    // Unlock node operator's RPL
    rocketMinipoolManager.incrementNodeFinalisedMinipoolCount(nodeAddress);
    // Update unbonded validator count if minipool is unbonded
    if (depositType == MinipoolDeposit.Empty) {
        RocketDAONodeTrustedInterface rocketDAONodeTrusted = RocketDAONodeTrustedInterface(getContractAddress("rocketDAONodeTrusted"));
        rocketDAONodeTrusted.decrementMemberUnbondedValidatorCount(nodeAddress);
    }
    // Set finalised flag
    finalised = true;
}
```

`_refund()` handing over control flow to `nodeWithdrawalAddress`:

```
function _refund() private {
    // Update refund balance
    uint256 refundAmount = nodeRefundBalance;
    nodeRefundBalance = 0;
    // Get node withdrawal address
    address nodeWithdrawalAddress = rocketStorage.getNodeWithdrawalAddress(nodeAddress);
    // Transfer refund amount
    (bool success,) = nodeWithdrawalAddress.call{value : refundAmount}("");
    require(success, "ETH refund amount was not successfully transferred to node operator");
    // Emit ether withdrawn event
    emit EtherWithdrawn(nodeWithdrawalAddress, refundAmount, block.timestamp);
}
```

Methods adjusting system settings called twice:

```
// Increments _nodeAddress' number of minipools that have been finalised
function incrementNodeFinalisedMinipoolCount(address _nodeAddress) override external onlyLatestContract("rocketMinipoolManager", address(this)) onlyRegisteredMinipool(msg.sender) {
    // Update the node specific count
    addUint(keccak256(abi.encodePacked("node.minipools.finalised.count", _nodeAddress)), 1);
    // Update the total count
    addUint(keccak256(bytes("minipools.finalised.count")), 1);
}
```

```
function decrementMemberUnbondedValidatorCount(address _nodeAddress) override external onlyLatestContract("rocketDAONodeTrusted", address(this)) onlyRegisteredMinipool(msg.sender) {
    subUint(keccak256(abi.encodePacked(daoNameSpace, "member.validator.unbonded.count", _nodeAddress)), 1);
}
```

Recommendation: We recommend setting the `finalised = true` flag immediately after checking it. Additionally, the function flow should adhere to the checks-effects-interactions pattern whenever possible. We recommend adding generic reentrancy protection whenever control flow is handed to an untrusted entity.
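A minimal sketch of the recommended ordering in `_finalise()`, setting the flag right after the check so a reentrant call into `finalise()` reverts on `require(!finalised, ...)`:

```
function _finalise() private {
    // Can only finalise the pool once
    require(!finalised, "Minipool has already been finalised");
    // Effects before interactions: block reentrant calls into finalise()
    finalised = true;
    // ... slash, refund, and external calls follow as before ...
}
```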
## RocketMinipoolDelegate - Sandwiching of Minipool calls can have unintended side effects

Severity: high

The `RocketMinipoolBase` contract exposes the functions `delegateUpgrade` and `delegateRollback`, allowing the minipool owner to switch between delegate implementations. While this gives the minipool owner a chance to roll back potentially malfunctioning upgrades, the fact that upgrades and rollbacks are instantaneous also lets them alternate between executing old and new code (e.g., by utilizing callbacks) and sandwich user calls to the minipool.

Assuming the latest minipool delegate implementation, any user can call `RocketMinipoolDelegate.slash`, which slashes the node operator's RPL balance if a slashing has been recorded on their validator. To mark the minipool as having been slashed, the `slashed` contract variable is set to `true`. A minipool owner can avoid this flag being set by sandwiching the user's call between a rollback to the old delegate and a re-upgrade.

In detail, the new slash implementation:

```
function _slash() private {
    // Get contracts
    RocketNodeStakingInterface rocketNodeStaking = RocketNodeStakingInterface(getContractAddress("rocketNodeStaking"));
    // Slash required amount and reset storage value
    uint256 slashAmount = nodeSlashBalance;
    nodeSlashBalance = 0;
    rocketNodeStaking.slashRPL(nodeAddress, slashAmount);
    // Record slashing
    slashed = true;
}
```

Compared to the old slash implementation:

```
function _slash() private {
    // Get contracts
    RocketNodeStakingInterface rocketNodeStaking = RocketNodeStakingInterface(getContractAddress("rocketNodeStaking"));
    // Slash required amount and reset storage value
    uint256 slashAmount = nodeSlashBalance;
    nodeSlashBalance = 0;
    rocketNodeStaking.slashRPL(nodeAddress, slashAmount);
}
```

While the bypass of `slashed` being set is a benign example, the effects of this issue in general could significantly disrupt minipool operations and potentially affect the system's funds. The impact highly depends on the changes introduced by future minipool upgrades.

Recommendation: We recommend limiting upgrades and rollbacks so that minipool owners cannot switch implementations with immediate effect. A time lock can fulfill this purpose: a minipool owner announces an upgrade to take effect at a specific block. User-made calls can then be preceded by a warning that an upgrade is pending and that their interaction may have unintended side effects.
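A sketch of such a time lock; the variable names, the one-day delay, and the way the delegate pointer is stored are all illustrative assumptions, not the actual Rocket Pool storage layout:

```
// Hypothetical sketch of a time-locked delegate upgrade
uint256 private constant UPGRADE_DELAY = 1 days;
address public pendingDelegate;
uint256 public upgradeAnnouncedAt;

function announceUpgrade(address newDelegate) external onlyMinipoolOwner(msg.sender) {
    // Record the intent; users interacting before executeUpgrade() can be warned
    pendingDelegate = newDelegate;
    upgradeAnnouncedAt = block.timestamp;
}

function executeUpgrade() external onlyMinipoolOwner(msg.sender) {
    require(block.timestamp >= upgradeAnnouncedAt + UPGRADE_DELAY, "Upgrade still time-locked");
    rocketMinipoolDelegate = pendingDelegate;
    pendingDelegate = address(0);
}
```

Applying the same delay to rollbacks would prevent the rollback/re-upgrade sandwich within a single transaction.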
## RocketDAONodeTrustedActions - No way to access ETH provided by non-member votes (Acknowledged)

Severity: high

DAO members can challenge nodes to prove liveliness for free. Non-DAO members must provide `members.challenge.cost = 1 eth` to start a challenge. However, the provided challenge cost is locked within the contract instead of being returned or recycled as system collateral.

```
// In the event that the majority/all of members go offline permanently and no more proposals could be passed, a current member or a regular node can 'challenge' a DAO members node to respond
// If it does not respond in the given window, it can be removed as a member. The one who removes the member after the challenge isn't met, must be another node other than the proposer to provide some oversight
// This should only be used in an emergency situation to recover the DAO. Members that need removing when consensus is still viable, should be done via the 'kick' method.
function actionChallengeMake(address _nodeAddress) override external onlyTrustedNode(_nodeAddress) onlyRegisteredNode(msg.sender) onlyLatestContract("rocketDAONodeTrustedActions", address(this)) payable {
    // Load contracts
    RocketDAONodeTrustedInterface rocketDAONode = RocketDAONodeTrustedInterface(getContractAddress("rocketDAONodeTrusted"));
    RocketDAONodeTrustedSettingsMembersInterface rocketDAONodeTrustedSettingsMembers = RocketDAONodeTrustedSettingsMembersInterface(getContractAddress("rocketDAONodeTrustedSettingsMembers"));
    // Members can challenge other members for free, but for a regular bonded node to challenge a DAO member, requires non-refundable payment to prevent spamming
    if(rocketDAONode.getMemberIsValid(msg.sender) != true) require(msg.value == rocketDAONodeTrustedSettingsMembers.getChallengeCost(), "Non DAO members must pay ETH to challenge a members node");
    // Can't challenge yourself duh
    require(msg.sender != _nodeAddress, "You cannot challenge yourself");
    // Is this member already being challenged?
```

Recommendation: We recommend locking the ETH inside the contract only for the duration of the challenge process. If a challenge is refuted, the locked value should be fed back into the system as protocol collateral. If the challenge succeeds and the node is kicked, it is assumed that the challenger will be repaid the amount they had to lock up to prove non-liveliness.
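A sketch of such an escrow, purely illustrative: the mapping, the hook names, and the `recycleChallengeDeposit` call on the deposit pool are hypothetical and would need to map onto the actual challenge lifecycle:

```
// Hypothetical sketch: hold the challenge cost in escrow per challenged node
mapping(address => uint256) private challengeDeposits;

function _onChallengeMade(address nodeAddress) internal {
    // Lock the non-member's payment for the challenge window
    challengeDeposits[nodeAddress] = msg.value;
}

function _onChallengeRefuted(address nodeAddress) internal {
    // Challenge failed: feed the locked value back as protocol collateral
    uint256 deposit = challengeDeposits[nodeAddress];
    challengeDeposits[nodeAddress] = 0;
    rocketDepositPool.recycleChallengeDeposit{value: deposit}(); // hypothetical sink
}

function _onChallengeSucceeded(address nodeAddress, address payable challenger) internal {
    // Node kicked: repay the challenger their locked amount
    uint256 deposit = challengeDeposits[nodeAddress];
    challengeDeposits[nodeAddress] = 0;
    (bool success, ) = challenger.call{value: deposit}("");
    require(success, "Challenger refund failed");
}
```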
## Multiple checks-effects violations

Severity: high

Throughout the system, there are various violations of the checks-effects-interactions pattern where contract state is updated after an external call. Since large parts of the Rocket Pool smart contracts are not guarded against reentrancy, the external call's recipient may reenter and potentially perform malicious actions that impact the overall accounting and, thus, system funds.

`distributeToOwner()` sends the contract's balance to the node or the withdrawal address before the internal accounting is cleared:

```
/// @notice Withdraw node balances from the minipool and close it. Only accepts calls from the owner
function close() override external onlyMinipoolOwner(msg.sender) onlyInitialised {
    // Check current status
    require(status == MinipoolStatus.Dissolved, "The minipool can only be closed while dissolved");
    // Distribute funds to owner
    distributeToOwner();
    // Destroy minipool
    RocketMinipoolManagerInterface rocketMinipoolManager = RocketMinipoolManagerInterface(getContractAddress("rocketMinipoolManager"));
    require(rocketMinipoolManager.getMinipoolExists(address(this)), "Minipool already closed");
    rocketMinipoolManager.destroyMinipool();
    // Clear state
    nodeDepositBalance = 0;
    nodeRefundBalance = 0;
    userDepositBalance = 0;
    userDepositBalanceLegacy = 0;
    userDepositAssignedTime = 0;
}
```

The withdrawal block should be set before any other contracts are called:

```
// Save block to prevent multiple withdrawals within a few blocks
withdrawalBlock = block.number;
```

The `slashed` state should be set before any external calls are made:

```
/// @dev Slash node operator's RPL balance based on nodeSlashBalance
function _slash() private {
    // Get contracts
    RocketNodeStakingInterface rocketNodeStaking = RocketNodeStakingInterface(getContractAddress("rocketNodeStaking"));
    // Slash required amount and reset storage value
    uint256 slashAmount = nodeSlashBalance;
    nodeSlashBalance = 0;
    rocketNodeStaking.slashRPL(nodeAddress, slashAmount);
    // Record slashing
    slashed = true;
}
```

In the bond reducer, the accounting values should be cleared before any external calls are made:

```
// Get desired amount
uint256 newBondAmount = getUint(keccak256(abi.encodePacked("minipool.bond.reduction.value", msg.sender)));
require(rocketNodeDeposit.isValidDepositAmount(newBondAmount), "Invalid bond amount");
// Calculate difference
uint256 existingBondAmount = minipool.getNodeDepositBalance();
uint256 delta = existingBondAmount.sub(newBondAmount);
// Get node address
address nodeAddress = minipool.getNodeAddress();
// Increase ETH matched or revert if exceeds limit based on current RPL stake
rocketNodeDeposit.increaseEthMatched(nodeAddress, delta);
// Increase node operator's deposit credit
rocketNodeDeposit.increaseDepositCreditBalance(nodeAddress, delta);
// Clean up state
deleteUint(keccak256(abi.encodePacked("minipool.bond.reduction.time", msg.sender)));
deleteUint(keccak256(abi.encodePacked("minipool.bond.reduction.value", msg.sender)));
```

The counter for reward snapshot execution should be incremented before RPL gets minted:

```
// Execute inflation if required
rplContract.inflationMintTokens();
// Increment the reward index and update the claim interval timestamp
incrementRewardIndex();
```

Recommendation: We recommend following the checks-effects-interactions pattern and adjusting contract state variables before making external calls. Given the upgradeable nature of the system, we also recommend strictly adhering to this practice even when all external calls are made to trusted network contracts.
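As one example, `close()` could be reordered so that all state is cleared before value leaves the contract. This sketch assumes `distributeToOwner()` does not read the cleared balances to compute the payout; if it does, the amounts would need to be cached first:

```
function close() override external onlyMinipoolOwner(msg.sender) onlyInitialised {
    // Checks
    require(status == MinipoolStatus.Dissolved, "The minipool can only be closed while dissolved");
    RocketMinipoolManagerInterface rocketMinipoolManager = RocketMinipoolManagerInterface(getContractAddress("rocketMinipoolManager"));
    require(rocketMinipoolManager.getMinipoolExists(address(this)), "Minipool already closed");
    // Effects: clear accounting before any value transfer
    nodeDepositBalance = 0;
    nodeRefundBalance = 0;
    userDepositBalance = 0;
    userDepositBalanceLegacy = 0;
    userDepositAssignedTime = 0;
    // Interactions last
    distributeToOwner();
    rocketMinipoolManager.destroyMinipool();
}
```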
## RocketMinipoolDelegate - Redundant refund() call on forced finalization

Severity: medium

The `RocketMinipoolDelegate.refund` function forces finalization if a user previously distributed the pool. However, `_finalise` already calls `_refund()` if there is a node refund balance to transfer, making the additional call to `_refund()` in `refund()` redundant.

```
function refund() override external onlyMinipoolOwnerOrWithdrawalAddress(msg.sender) onlyInitialised {
    // Check refund balance
    require(nodeRefundBalance > 0, "No amount of the node deposit is available for refund");
    // If this minipool was distributed by a user, force finalisation on the node operator
    if (!finalised && userDistributed) {
        _finalise();
    }
    // Refund node
    _refund();
}
```

```
function _finalise() private {
    // Get contracts
    RocketMinipoolManagerInterface rocketMinipoolManager = RocketMinipoolManagerInterface(getContractAddress("rocketMinipoolManager"));
    // Can only finalise the pool once
    require(!finalised, "Minipool has already been finalised");
    // Set finalised flag
    finalised = true;
    // If slash is required then perform it
    if (nodeSlashBalance > 0) {
        _slash();
    }
    // Refund node operator if required
    if (nodeRefundBalance > 0) {
        _refund();
    }
```

Resolution: Fixed in https://github.com/rocket-pool/rocketpool/tree/77d7cca65b7c0557cfda078a4fc45f9ac0cc6cc6 by refactoring `refund()` to avoid a double invocation of `_refund()` in the `_finalise()` code path. The client commented: "Fixed per the recommendation. Thanks."

Recommendation: We recommend refactoring the if condition so that `_refund()` is called in the else branch.
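A sketch of the refactored function along the lines of the recommendation, moving `_refund()` into the else branch so it only runs when `_finalise()` has not already refunded:

```
function refund() override external onlyMinipoolOwnerOrWithdrawalAddress(msg.sender) onlyInitialised {
    // Check refund balance
    require(nodeRefundBalance > 0, "No amount of the node deposit is available for refund");
    if (!finalised && userDistributed) {
        // _finalise() already performs the refund internally
        _finalise();
    } else {
        // Refund node
        _refund();
    }
}
```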
## Sparse documentation and accounting complexity (Acknowledged)

Severity: medium

Throughout the project, inline documentation is either sparse or missing altogether. Furthermore, few technical documents about the system's design rationale are available. The recent releases' increased complexity makes it significantly harder to trace the flow of funds through the system as components change semantics, are split into separate contracts, etc.

It is essential that documentation not only outlines what is being done but also why, and what a function's role is in the system's bigger picture. Many comments in the code base fail to fulfill this requirement and are thus redundant, e.g.:

```
// Sanity check that refund balance is zero
require(nodeRefundBalance == 0, "Refund balance not zero");
```

```
// Remove from vacant set
rocketMinipoolManager.removeVacantMinipool();
```

```
if (ownerCalling) {
    // Finalise the minipool if the owner is calling
    _finalise();
```

The increased complexity and lack of documentation increase the likelihood of developer error. Furthermore, the time spent maintaining the code and introducing new developers to the code base will drastically increase. This effect can be especially problematic in the system's accounting of funds, as the various stages of a minipool imply different flows of funds and interactions with external dependencies. Documentation should explain the rationale behind specific hardcoded values, such as the magic `8 ether` boundary for withdrawal detection. An example of a lack of documentation and distribution across components is the calculation and influence of `ethMatched`, which plays a role in:

- the minipool bond reducer,
- the node deposit contract,
- the node manager, and
- the node staking contract.

Recommendation: As the Rocketpool system grows in complexity, we highly recommend significantly increasing the amount of inline comments and general technical documentation, and exploring ways to further centralize the system's accounting to provide a clear picture of which funds move where and at what point in time. Where the flow of funds is obscured because multiple components or multi-step processes are involved, we recommend adding extensive inline documentation to give context.
null
```\\n// Sanity check that refund balance is zero\\nrequire(nodeRefundBalance == 0, "Refund balance not zero");\\n```\\n
RocketNodeDistributor - Missing extcodesize check in dynamic proxy Won't Fix
medium
`RocketNodeDistributor` dynamically retrieves the currently set delegate from the centralized `RocketStorage` contract. The target contract (delegate) is resolved inside the fallback function. The lookup may return `address(0)` because `rocketStorage.getAddress()` does not enforce that the requested settings key exists, which may lead to `RocketNodeDistributor` delegate-calling into `address(0)`, a call that succeeds without an error. This might stay undetected when calling `RocketNodeDistributorDelegate.distribute()` as the method does not return a value, which is consistent with calling a target address with no code.\\n```\\nfallback() external payable {\\n address \\_target = rocketStorage.getAddress(distributorStorageKey);\\n assembly {\\n calldatacopy(0x0, 0x0, calldatasize())\\n let result := delegatecall(gas(), \\_target, 0x0, calldatasize(), 0x0, 0)\\n returndatacopy(0x0, 0x0, returndatasize())\\n switch result case 0 {revert(0, returndatasize())} default {return (0, returndatasize())}\\n }\\n}\\n```\\n\\n```\\nfunction getAddress(bytes32 \\_key) override external view returns (address r) {\\n return addressStorage[\\_key];\\n}\\n```\\n
Before delegate-calling into the target contract, check that it contains code.\\n```\\nuint256 codeSize;\\nassembly {\\n codeSize := extcodesize(\\_target)\\n}\\nrequire(codeSize > 0, "Delegate contract does not exist");\\n```\\n
null
```\\nfallback() external payable {\\n address \\_target = rocketStorage.getAddress(distributorStorageKey);\\n assembly {\\n calldatacopy(0x0, 0x0, calldatasize())\\n let result := delegatecall(gas(), \\_target, 0x0, calldatasize(), 0x0, 0)\\n returndatacopy(0x0, 0x0, returndatasize())\\n switch result case 0 {revert(0, returndatasize())} default {return (0, returndatasize())}\\n }\\n}\\n```\\n
Kicked oDAO members' votes taken into account Acknowledged
medium
oDAO members can vote on proposals or submit external data to the system, acting as an oracle. Data submission is based on a vote by itself, and multiple oDAO members must submit the same data until a configurable threshold (51% by default) is reached for the data to be confirmed.\\nWhen a member gets kicked or leaves the oDAO after voting, their vote is still accounted for while the total number of oDAO members decreases.\\nA (group of) malicious oDAO actors may exploit this fact to artificially lower the consensus threshold by voting for a proposal and then leaving the oDAO. This will leave excess votes with the proposal while the total member count decreases.\\nFor example, let's assume there are 17 oDAO members. 9 members must vote for the proposal for it to pass (52.9%). Let's assume 8 members voted for it, while the rest abstained or are against the proposal (47%, threshold not met). The proposal is unlikely to pass unless two malicious oDAO members leave the DAO, lowering the member count to 15 in an attempt to manipulate the vote, suddenly inflating vote power from 8/17 (47%; rejected) to 8/15 (53.3%; passed).\\nThe crux is that the votes of ex-oDAO members still count, while the quorum is based on the current oDAO member count.\\nHere are some examples; however, this is a general pattern used for oDAO votes in the system.\\nExample: RocketNetworkPrices\\nMembers submit votes via `submitPrices()`. If the threshold is reached, the proposal is executed. The quorum is based on the current oDAO member count, while votes of ex-oDAO members are still accounted for. 
If a proposal is a near miss, malicious actors can force execute it by leaving the oDAO, lowering the threshold, and then calling `executeUpdatePrices()` to execute it.\\n```\\nRocketDAONodeTrustedInterface rocketDAONodeTrusted = RocketDAONodeTrustedInterface(getContractAddress("rocketDAONodeTrusted"));\\nif (calcBase.mul(submissionCount).div(rocketDAONodeTrusted.getMemberCount()) >= rocketDAOProtocolSettingsNetwork.getNodeConsensusThreshold()) {\\n // Update the price\\n updatePrices(\\_block, \\_rplPrice);\\n}\\n```\\n\\n```\\nfunction executeUpdatePrices(uint256 \\_block, uint256 \\_rplPrice) override external onlyLatestContract("rocketNetworkPrices", address(this)) {\\n // Check settings\\n```\\n\\nRocketMinipoolBondReducer\\nThe `RocketMinipoolBondReducer` contract's `voteCancelReduction` function takes old votes of previously kicked oDAO members into account. This results in the vote being significantly higher and increases the potential for malicious actors, even after their removal, to sway the vote. 
Note that a canceled bond reduction cannot be undone.\\n```\\nRocketDAONodeTrustedSettingsMinipoolInterface rocketDAONodeTrustedSettingsMinipool = RocketDAONodeTrustedSettingsMinipoolInterface(getContractAddress("rocketDAONodeTrustedSettingsMinipool"));\\nuint256 quorum = rocketDAONode.getMemberCount().mul(rocketDAONodeTrustedSettingsMinipool.getCancelBondReductionQuorum()).div(calcBase);\\nbytes32 totalCancelVotesKey = keccak256(abi.encodePacked("minipool.bond.reduction.vote.count", \\_minipoolAddress));\\nuint256 totalCancelVotes = getUint(totalCancelVotesKey).add(1);\\nif (totalCancelVotes > quorum) {\\n```\\n\\nRocketNetworkPenalties\\n```\\nRocketDAONodeTrustedInterface rocketDAONodeTrusted = RocketDAONodeTrustedInterface(getContractAddress("rocketDAONodeTrusted"));\\nif (calcBase.mul(submissionCount).div(rocketDAONodeTrusted.getMemberCount()) >= rocketDAOProtocolSettingsNetwork.getNodePenaltyThreshold()) {\\n setBool(executedKey, true);\\n incrementMinipoolPenaltyCount(\\_minipoolAddress);\\n}\\n```\\n\\n```\\n// Executes incrementMinipoolPenaltyCount if consensus threshold is reached\\nfunction executeUpdatePenalty(address \\_minipoolAddress, uint256 \\_block) override external onlyLatestContract("rocketNetworkPenalties", address(this)) {\\n // Get contracts\\n RocketDAOProtocolSettingsNetworkInterface rocketDAOProtocolSettingsNetwork = RocketDAOProtocolSettingsNetworkInterface(getContractAddress("rocketDAOProtocolSettingsNetwork"));\\n // Get submission keys\\n```\\n
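The threshold arithmetic behind this pattern can be sketched with a short illustrative Python model of the contracts' fixed-point quorum check (the `calcBase` of 1e18 and the 51% default threshold are taken from the finding; the function name is hypothetical):

```python
CALC_BASE = 10**18            # fixed-point base used by the contracts (calcBase)
THRESHOLD = 51 * 10**16       # default 51% consensus threshold

def proposal_passes(votes: int, member_count: int) -> bool:
    """Mirrors calcBase.mul(submissionCount).div(getMemberCount()) >= threshold."""
    return CALC_BASE * votes // member_count >= THRESHOLD

# 8 of 17 votes is ~47.1%: the submission is not executed
assert not proposal_passes(8, 17)

# Two voters leave the oDAO; the same 8 recorded votes are now ~53.3% of 15
assert proposal_passes(8, 15)
```

Because recorded votes are never pruned, shrinking the member count alone is enough to push a previously rejected tally over the threshold.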
Track oDAO members' votes and remove them from the tally when the removal from the oDAO is executed.
null
```\\nRocketDAONodeTrustedInterface rocketDAONodeTrusted = RocketDAONodeTrustedInterface(getContractAddress("rocketDAONodeTrusted"));\\nif (calcBase.mul(submissionCount).div(rocketDAONodeTrusted.getMemberCount()) >= rocketDAOProtocolSettingsNetwork.getNodeConsensusThreshold()) {\\n // Update the price\\n updatePrices(\\_block, \\_rplPrice);\\n}\\n```\\n
RocketDAOProtocolSettingsRewards - settings key collission Acknowledged
medium
A malicious user may craft a DAO protocol proposal to set a rewards claimer for a specific contract, thus overwriting another contract's settings. This issue arises because user-supplied input is concatenated into settings keys without a unique prefix or delimiter.\\n```\\nfunction setSettingRewardsClaimer(string memory \\_contractName, uint256 \\_perc) override public onlyDAOProtocolProposal {\\n // Get the total perc set, can't be more than 100\\n uint256 percTotal = getRewardsClaimersPercTotal();\\n // If this group already exists, it will update the perc\\n uint256 percTotalUpdate = percTotal.add(\\_perc).sub(getRewardsClaimerPerc(\\_contractName));\\n // Can't be more than a total claim amount of 100%\\n require(percTotalUpdate <= 1 ether, "Claimers cannot total more than 100%");\\n // Update the total\\n setUint(keccak256(abi.encodePacked(settingNameSpace,"rewards.claims", "group.totalPerc")), percTotalUpdate);\\n // Update/Add the claimer amount\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount", \\_contractName)), \\_perc);\\n // Set the time it was updated at\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount.updated.time", \\_contractName)), block.timestamp);\\n}\\n```\\n\\nThe method updates the rewards claimer for a specific contract by writing to the following two setting keys:\\n`settingNameSpace.rewards.claimsgroup.amount<_contractName>`\\n`settingNameSpace.rewards.claimsgroup.amount.updated.time<_contractName>`\\nDue to the way the settings hierarchy was chosen in this case, a malicious proposal might define a `<_contractName> = .updated.time<targetContract>` that overwrites the settings of a different contract with an invalid value.\\nNote that the issue of delimiter consistency is also discussed in issue 5.12.\\nThe severity rating is based on the fact that this should be detectable by DAO members. However, following a defense-in-depth approach means that such collisions should be avoided wherever possible.
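The collision can be illustrated with a small Python sketch that models the key preimages built by `abi.encodePacked` as string concatenation (the namespace value and the contract name "Target" are hypothetical; since `keccak256` is deterministic, equal preimages yield equal storage keys):

```python
def amount_key(namespace: str, contract_name: str) -> str:
    # Preimage of keccak256(abi.encodePacked(ns, "rewards.claims", "group.amount", name))
    return namespace + "rewards.claims" + "group.amount" + contract_name

def updated_time_key(namespace: str, contract_name: str) -> str:
    # Preimage of keccak256(abi.encodePacked(ns, "rewards.claims", "group.amount.updated.time", name))
    return namespace + "rewards.claims" + "group.amount.updated.time" + contract_name

ns = "dao.protocol.setting.rewards."  # hypothetical settingNameSpace value

# A crafted contract name collides with another contract's "updated.time" key:
assert amount_key(ns, ".updated.timeTarget") == updated_time_key(ns, "Target")
```

Writing the claimer percentage for the crafted name would therefore clobber the `updated.time` slot of the target contract.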
We recommend enforcing a unique prefix and delimiter when concatenating user-provided input to setting keys. In this specific case, the settings could be renamed as follows:\\n`settingNameSpace.rewards.claimsgroup.amount.value<_contractName>`\\n`settingNameSpace.rewards.claimsgroup.amount.updated.time<_contractName>`
null
```\\nfunction setSettingRewardsClaimer(string memory \\_contractName, uint256 \\_perc) override public onlyDAOProtocolProposal {\\n // Get the total perc set, can't be more than 100\\n uint256 percTotal = getRewardsClaimersPercTotal();\\n // If this group already exists, it will update the perc\\n uint256 percTotalUpdate = percTotal.add(\\_perc).sub(getRewardsClaimerPerc(\\_contractName));\\n // Can't be more than a total claim amount of 100%\\n require(percTotalUpdate <= 1 ether, "Claimers cannot total more than 100%");\\n // Update the total\\n setUint(keccak256(abi.encodePacked(settingNameSpace,"rewards.claims", "group.totalPerc")), percTotalUpdate);\\n // Update/Add the claimer amount\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount", \\_contractName)), \\_perc);\\n // Set the time it was updated at\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount.updated.time", \\_contractName)), block.timestamp);\\n}\\n```\\n
RocketDAOProtocolSettingsRewards - missing setting delimiters Acknowledged
medium
Settings in the Rocket Pool system are hierarchical, and namespaces are prefixed using dot delimiters.\\nCalling `abi.encodePacked(<string>, <string>)` on strings performs a simple concatenation. The settings' naming scheme suggests that the following example writes to a key named: `<settingNameSpace>.rewards.claims.group.amount.<_contractName>`. However, due to missing delimiters, the actual key written to is: `<settingNameSpace>.rewards.claimsgroup.amount<_contractName>`.\\nNote that there is no delimiter between `claims|group` and `amount|<_contractName>`.\\n```\\nfunction setSettingRewardsClaimer(string memory \\_contractName, uint256 \\_perc) override public onlyDAOProtocolProposal {\\n // Get the total perc set, can't be more than 100\\n uint256 percTotal = getRewardsClaimersPercTotal();\\n // If this group already exists, it will update the perc\\n uint256 percTotalUpdate = percTotal.add(\\_perc).sub(getRewardsClaimerPerc(\\_contractName));\\n // Can't be more than a total claim amount of 100%\\n require(percTotalUpdate <= 1 ether, "Claimers cannot total more than 100%");\\n // Update the total\\n setUint(keccak256(abi.encodePacked(settingNameSpace,"rewards.claims", "group.totalPerc")), percTotalUpdate);\\n // Update/Add the claimer amount\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount", \\_contractName)), \\_perc);\\n // Set the time it was updated at\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount.updated.time", \\_contractName)), block.timestamp);\\n}\\n```\\n
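A minimal Python sketch (with a hypothetical namespace prefix and contract name) shows how the missing delimiters collapse the intended key hierarchy:

```python
ns = "dao.protocol.setting.rewards."   # hypothetical settingNameSpace prefix

# Key the naming scheme suggests (delimiters between every component):
intended = ns + "rewards.claims" + "." + "group.amount" + "." + "rocketClaimDAO"

# Key actually produced by abi.encodePacked-style concatenation:
actual = ns + "rewards.claims" + "group.amount" + "rocketClaimDAO"

assert intended == "dao.protocol.setting.rewards.rewards.claims.group.amount.rocketClaimDAO"
assert actual == "dao.protocol.setting.rewards.rewards.claimsgroup.amountrocketClaimDAO"
assert intended != actual
```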
We recommend adding the missing intermediate delimiters. The system should enforce delimiters after the last setting key before user input is concatenated to reduce the risk of accidental namespace collisions.
null
```\\nfunction setSettingRewardsClaimer(string memory \\_contractName, uint256 \\_perc) override public onlyDAOProtocolProposal {\\n // Get the total perc set, can't be more than 100\\n uint256 percTotal = getRewardsClaimersPercTotal();\\n // If this group already exists, it will update the perc\\n uint256 percTotalUpdate = percTotal.add(\\_perc).sub(getRewardsClaimerPerc(\\_contractName));\\n // Can't be more than a total claim amount of 100%\\n require(percTotalUpdate <= 1 ether, "Claimers cannot total more than 100%");\\n // Update the total\\n setUint(keccak256(abi.encodePacked(settingNameSpace,"rewards.claims", "group.totalPerc")), percTotalUpdate);\\n // Update/Add the claimer amount\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount", \\_contractName)), \\_perc);\\n // Set the time it was updated at\\n setUint(keccak256(abi.encodePacked(settingNameSpace, "rewards.claims", "group.amount.updated.time", \\_contractName)), block.timestamp);\\n}\\n```\\n
Use of address instead of specific contract types Acknowledged
low
Rather than using a low-level `address` type and then casting to the safer contract type, it's better to use the best type available by default so the compiler can check type safety and contract existence, and only downcast to less secure low-level types (address) when necessary.\\n`RocketStorageInterface _rocketStorage` should be declared in the arguments, removing the need to cast the address explicitly.\\n```\\n/// @notice Sets up starting delegate contract and then delegates initialisation to it\\nfunction initialise(address \\_rocketStorage, address \\_nodeAddress) external override notSelf {\\n // Check input\\n require(\\_nodeAddress != address(0), "Invalid node address");\\n require(storageState == StorageState.Undefined, "Already initialised");\\n // Set storage state to uninitialised\\n storageState = StorageState.Uninitialised;\\n // Set rocketStorage\\n rocketStorage = RocketStorageInterface(\\_rocketStorage);\\n```\\n\\n`RocketMinipoolInterface _minipoolAddress` should be declared in the arguments, removing the need to cast the address explicitly. Downcast to low-level address if needed. 
The event can be redeclared with the contract type.\\n```\\nfunction beginReduceBondAmount(address \\_minipoolAddress, uint256 \\_newBondAmount) override external onlyLatestContract("rocketMinipoolBondReducer", address(this)) {\\n RocketMinipoolInterface minipool = RocketMinipoolInterface(\\_minipoolAddress);\\n```\\n\\n```\\n/// @notice Returns whether owner of given minipool can reduce bond amount given the waiting period constraint\\n/// @param \\_minipoolAddress Address of the minipool\\nfunction canReduceBondAmount(address \\_minipoolAddress) override public view returns (bool) {\\n RocketMinipoolInterface minipool = RocketMinipoolInterface(\\_minipoolAddress);\\n RocketDAONodeTrustedSettingsMinipoolInterface rocketDAONodeTrustedSettingsMinipool = RocketDAONodeTrustedSettingsMinipoolInterface(getContractAddress("rocketDAONodeTrustedSettingsMinipool"));\\n uint256 reduceBondTime = getUint(keccak256(abi.encodePacked("minipool.bond.reduction.time", \\_minipoolAddress)));\\n return rocketDAONodeTrustedSettingsMinipool.isWithinBondReductionWindow(block.timestamp.sub(reduceBondTime));\\n}\\n```\\n\\n```\\nfunction voteCancelReduction(address \\_minipoolAddress) override external onlyTrustedNode(msg.sender) onlyLatestContract("rocketMinipoolBondReducer", address(this)) {\\n // Prevent calling if consensus has already been reached\\n require(!getReduceBondCancelled(\\_minipoolAddress), "Already cancelled");\\n // Get contracts\\n RocketMinipoolInterface minipool = RocketMinipoolInterface(\\_minipoolAddress);\\n```\\n\\nNote that `abi.encode*(contractType)` assumes `address` for contract types by default. 
An explicit downcast is not required.\\n```\\n » Test example = Test(0x5B38Da6a701c568545dCfcB03FcB875f56beddC4)\\n » abi.encodePacked("hi", example)\\n0x68695b38da6a701c568545dcfcb03fcb875f56beddc4\\n » abi.encodePacked("hi", address(example))\\n0x68695b38da6a701c568545dcfcb03fcb875f56beddc4\\n```\\n\\nMore examples of `address _minipool` declarations:\\n```\\n/// @dev Internal logic to set a minipool's pubkey\\n/// @param \\_pubkey The pubkey to set for the calling minipool\\nfunction \\_setMinipoolPubkey(address \\_minipool, bytes calldata \\_pubkey) private {\\n // Load contracts\\n AddressSetStorageInterface addressSetStorage = AddressSetStorageInterface(getContractAddress("addressSetStorage"));\\n // Initialize minipool & get properties\\n RocketMinipoolInterface minipool = RocketMinipoolInterface(\\_minipool);\\n```\\n\\n```\\nfunction getMinipoolDetails(address \\_minipoolAddress) override external view returns (MinipoolDetails memory) {\\n // Get contracts\\n RocketMinipoolInterface minipoolInterface = RocketMinipoolInterface(\\_minipoolAddress);\\n RocketMinipoolBase minipool = RocketMinipoolBase(payable(\\_minipoolAddress));\\n RocketNetworkPenaltiesInterface rocketNetworkPenalties = RocketNetworkPenaltiesInterface(getContractAddress("rocketNetworkPenalties"));\\n```\\n\\nMore examples of `RocketStorageInterface _rocketStorage` casts:\\n```\\ncontract RocketNodeDistributor is RocketNodeDistributorStorageLayout {\\n bytes32 immutable distributorStorageKey;\\n\\n constructor(address \\_nodeAddress, address \\_rocketStorage) {\\n rocketStorage = RocketStorageInterface(\\_rocketStorage);\\n nodeAddress = \\_nodeAddress;\\n```\\n
We recommend using more specific types instead of `address` where possible. Downcast if necessary. This goes for parameter types as well as state variable types.
null
```\\n/// @notice Sets up starting delegate contract and then delegates initialisation to it\\nfunction initialise(address \\_rocketStorage, address \\_nodeAddress) external override notSelf {\\n // Check input\\n require(\\_nodeAddress != address(0), "Invalid node address");\\n require(storageState == StorageState.Undefined, "Already initialised");\\n // Set storage state to uninitialised\\n storageState = StorageState.Uninitialised;\\n // Set rocketStorage\\n rocketStorage = RocketStorageInterface(\\_rocketStorage);\\n```\\n
Redundant double casts Acknowledged
low
`_rocketStorageAddress` is already of contract type `RocketStorageInterface`.\\n```\\n/// @dev Set the main Rocket Storage address\\nconstructor(RocketStorageInterface \\_rocketStorageAddress) {\\n // Update the contract address\\n rocketStorage = RocketStorageInterface(\\_rocketStorageAddress);\\n}\\n```\\n\\n`_tokenAddress` is already of contract type `ERC20Burnable`.\\n```\\nfunction burnToken(ERC20Burnable \\_tokenAddress, uint256 \\_amount) override external onlyLatestNetworkContract {\\n // Get contract key\\n bytes32 contractKey = keccak256(abi.encodePacked(getContractName(msg.sender), \\_tokenAddress));\\n // Update balances\\n tokenBalances[contractKey] = tokenBalances[contractKey].sub(\\_amount);\\n // Get the token ERC20 instance\\n ERC20Burnable tokenContract = ERC20Burnable(\\_tokenAddress);\\n```\\n\\n`_rocketTokenRPLFixedSupplyAddress` is already of contract type `IERC20`.\\n```\\nconstructor(RocketStorageInterface \\_rocketStorageAddress, IERC20 \\_rocketTokenRPLFixedSupplyAddress) RocketBase(\\_rocketStorageAddress) ERC20("Rocket Pool Protocol", "RPL") {\\n // Version\\n version = 1;\\n // Set the mainnet RPL fixed supply token address\\n rplFixedSupplyContract = IERC20(\\_rocketTokenRPLFixedSupplyAddress);\\n```\\n
We recommend removing the unnecessary double casts and copies of local variables.
null
```\\n/// @dev Set the main Rocket Storage address\\nconstructor(RocketStorageInterface \\_rocketStorageAddress) {\\n // Update the contract address\\n rocketStorage = RocketStorageInterface(\\_rocketStorageAddress);\\n}\\n```\\n
RocketMinipoolDelegate - Missing event in prepareVacancy
low
The function `prepareVacancy` updates multiple contract state variables and should therefore emit an event.\\n```\\n/// @dev Sets the bond value and vacancy flag on this minipool\\n/// @param \\_bondAmount The bond amount selected by the node operator\\n/// @param \\_currentBalance The current balance of the validator on the beaconchain (will be checked by oDAO and scrubbed if not correct)\\nfunction prepareVacancy(uint256 \\_bondAmount, uint256 \\_currentBalance) override external onlyLatestContract("rocketMinipoolManager", msg.sender) onlyInitialised {\\n // Check status\\n require(status == MinipoolStatus.Initialised, "Must be in initialised status");\\n // Sanity check that refund balance is zero\\n require(nodeRefundBalance == 0, "Refund balance not zero");\\n // Check balance\\n RocketDAOProtocolSettingsMinipoolInterface rocketDAOProtocolSettingsMinipool = RocketDAOProtocolSettingsMinipoolInterface(getContractAddress("rocketDAOProtocolSettingsMinipool"));\\n uint256 launchAmount = rocketDAOProtocolSettingsMinipool.getLaunchBalance();\\n require(\\_currentBalance >= launchAmount, "Balance is too low");\\n // Store bond amount\\n nodeDepositBalance = \\_bondAmount;\\n // Calculate user amount from launch amount\\n userDepositBalance = launchAmount.sub(nodeDepositBalance);\\n // Flag as vacant\\n vacant = true;\\n preMigrationBalance = \\_currentBalance;\\n // Refund the node whatever rewards they have accrued prior to becoming a RP validator\\n nodeRefundBalance = \\_currentBalance.sub(launchAmount);\\n // Set status to preLaunch\\n setStatus(MinipoolStatus.Prelaunch);\\n}\\n```\\n
Emit the missing event.
null
```\\n/// @dev Sets the bond value and vacancy flag on this minipool\\n/// @param \\_bondAmount The bond amount selected by the node operator\\n/// @param \\_currentBalance The current balance of the validator on the beaconchain (will be checked by oDAO and scrubbed if not correct)\\nfunction prepareVacancy(uint256 \\_bondAmount, uint256 \\_currentBalance) override external onlyLatestContract("rocketMinipoolManager", msg.sender) onlyInitialised {\\n // Check status\\n require(status == MinipoolStatus.Initialised, "Must be in initialised status");\\n // Sanity check that refund balance is zero\\n require(nodeRefundBalance == 0, "Refund balance not zero");\\n // Check balance\\n RocketDAOProtocolSettingsMinipoolInterface rocketDAOProtocolSettingsMinipool = RocketDAOProtocolSettingsMinipoolInterface(getContractAddress("rocketDAOProtocolSettingsMinipool"));\\n uint256 launchAmount = rocketDAOProtocolSettingsMinipool.getLaunchBalance();\\n require(\\_currentBalance >= launchAmount, "Balance is too low");\\n // Store bond amount\\n nodeDepositBalance = \\_bondAmount;\\n // Calculate user amount from launch amount\\n userDepositBalance = launchAmount.sub(nodeDepositBalance);\\n // Flag as vacant\\n vacant = true;\\n preMigrationBalance = \\_currentBalance;\\n // Refund the node whatever rewards they have accrued prior to becoming a RP validator\\n nodeRefundBalance = \\_currentBalance.sub(launchAmount);\\n // Set status to preLaunch\\n setStatus(MinipoolStatus.Prelaunch);\\n}\\n```\\n
RocketMinipool - Inconsistent access control modifier declaration onlyMinipoolOwner Acknowledged
low
The access control modifier `onlyMinipoolOwner` should be renamed to `onlyMinipoolOwnerOrWithdrawalAddress` to be consistent with the actual check permitting the owner or the withdrawal address to interact with the function. This would also be consistent with other declarations in the codebase.\\nExample\\nThe `onlyMinipoolOwner` modifier in `RocketMinipoolBase` is the same as `onlyMinipoolOwnerOrWithdrawalAddress` in other modules.\\n```\\n/// @dev Only allow access from the owning node address\\nmodifier onlyMinipoolOwner() {\\n // Only the node operator can upgrade\\n address withdrawalAddress = rocketStorage.getNodeWithdrawalAddress(nodeAddress);\\n require(msg.sender == nodeAddress || msg.sender == withdrawalAddress, "Only the node operator can access this method");\\n \\_;\\n}\\n```\\n\\n```\\n// Only allow access from the owning node address\\nmodifier onlyMinipoolOwner() {\\n // Only the node operator can upgrade\\n address withdrawalAddress = rocketStorage.getNodeWithdrawalAddress(nodeAddress);\\n require(msg.sender == nodeAddress || msg.sender == withdrawalAddress, "Only the node operator can access this method");\\n \\_;\\n}\\n```\\n\\nOther declarations:\\n```\\n/// @dev Only allow access from the owning node address\\nmodifier onlyMinipoolOwner(address \\_nodeAddress) {\\n require(\\_nodeAddress == nodeAddress, "Invalid minipool owner");\\n \\_;\\n}\\n\\n/// @dev Only allow access from the owning node address or their withdrawal address\\nmodifier onlyMinipoolOwnerOrWithdrawalAddress(address \\_nodeAddress) {\\n require(\\_nodeAddress == nodeAddress || \\_nodeAddress == rocketStorage.getNodeWithdrawalAddress(nodeAddress), "Invalid minipool owner");\\n \\_;\\n}\\n```\\n\\n```\\n// Only allow access from the owning node address\\nmodifier onlyMinipoolOwner(address \\_nodeAddress) {\\n require(\\_nodeAddress == nodeAddress, "Invalid minipool owner");\\n \\_;\\n}\\n\\n// Only allow access from the owning node address or their withdrawal address\\nmodifier 
onlyMinipoolOwnerOrWithdrawalAddress(address \\_nodeAddress) {\\n require(\\_nodeAddress == nodeAddress || \\_nodeAddress == rocketStorage.getNodeWithdrawalAddress(nodeAddress), "Invalid minipool owner");\\n \\_;\\n}\\n```\\n
Resolution\\nAcknowledged by the client. Not addressed within rocket-pool/[email protected]77d7cca\\nAgreed. This would change a lot of contracts just for a minor improvement in readability.\\nWe recommend renaming `RocketMinipoolBase.onlyMinipoolOwner` to `RocketMinipoolBase.onlyMinipoolOwnerOrWithdrawalAddress`.
null
```\\n/// @dev Only allow access from the owning node address\\nmodifier onlyMinipoolOwner() {\\n // Only the node operator can upgrade\\n address withdrawalAddress = rocketStorage.getNodeWithdrawalAddress(nodeAddress);\\n require(msg.sender == nodeAddress || msg.sender == withdrawalAddress, "Only the node operator can access this method");\\n \\_;\\n}\\n```\\n
RocketDAO*Settings - settingNameSpace should be immutable Acknowledged
low
The `settingNameSpace` in the abstract contract `RocketDAONodeTrustedSettings` is only set on contract deployment. Hence, the fields should be declared immutable to make clear that the settings namespace cannot change after construction.\\n`RocketDAONodeTrustedSettings`\\n```\\n// The namespace for a particular group of settings\\nbytes32 settingNameSpace;\\n```\\n\\n```\\n// Construct\\nconstructor(RocketStorageInterface \\_rocketStorageAddress, string memory \\_settingNameSpace) RocketBase(\\_rocketStorageAddress) {\\n // Apply the setting namespace\\n settingNameSpace = keccak256(abi.encodePacked("dao.trustednodes.setting.", \\_settingNameSpace));\\n}\\n```\\n\\n`RocketDAOProtocolSettings`\\n```\\n// The namespace for a particular group of settings\\nbytes32 settingNameSpace;\\n```\\n\\n```\\n// Construct\\nconstructor(RocketStorageInterface \\_rocketStorageAddress, string memory \\_settingNameSpace) RocketBase(\\_rocketStorageAddress) {\\n // Apply the setting namespace\\n settingNameSpace = keccak256(abi.encodePacked("dao.protocol.setting.", \\_settingNameSpace));\\n}\\n```\\n\\n```\\nconstructor(RocketStorageInterface \\_rocketStorageAddress) RocketDAOProtocolSettings(\\_rocketStorageAddress, "auction") {\\n // Set version\\n version = 1;\\n```\\n
We recommend using the `immutable` annotation in Solidity (see Immutable).
null
```\\n// The namespace for a particular group of settings\\nbytes32 settingNameSpace;\\n```\\n
didTransferShares function has no access control modifier
high
The staked tokens (shares) in Forta are meant to be transferable. Similarly, the rewards allocation for these shares for delegated staking is meant to be transferable as well. This allocation for the shares' owner is tracked in the `StakeAllocator`. To enable this, the Forta staking contract `FortaStaking` implements a `_beforeTokenTransfer()` function that calls `_allocator.didTransferShares()` when it is appropriate to transfer the underlying allocation.\\n```\\nfunction \\_beforeTokenTransfer(\\n address operator,\\n address from,\\n address to,\\n uint256[] memory ids,\\n uint256[] memory amounts,\\n bytes memory data\\n) internal virtual override {\\n for (uint256 i = 0; i < ids.length; i++) {\\n if (FortaStakingUtils.isActive(ids[i])) {\\n uint8 subjectType = FortaStakingUtils.subjectTypeOfShares(ids[i]);\\n if (subjectType == DELEGATOR\\_NODE\\_RUNNER\\_SUBJECT && to != address(0) && from != address(0)) {\\n \\_allocator.didTransferShares(ids[i], subjectType, from, to, amounts[i]);\\n }\\n```\\n\\nDue to this, the `StakeAllocator.didTransferShares()` has an `external` visibility so it can be called from the `FortaStaking` contract to perform transfers. However, there is no access control modifier to allow only the staking contract to call this. Therefore, anyone can call this function with whatever parameters they want.\\n```\\nfunction didTransferShares(\\n uint256 sharesId,\\n uint8 subjectType,\\n address from,\\n address to,\\n uint256 sharesAmount\\n) external {\\n \\_rewardsDistributor.didTransferShares(sharesId, subjectType, from, to, sharesAmount);\\n}\\n```\\n\\nSince the allocation isn't represented as a token standard and is tracked directly in the `StakeAllocator` and `RewardsDistributor`, it lacks many standard checks that would prevent abuse of the function. 
For example, this function does not have a check for allowance or `msg.sender==from`, so any user could call `didTransferShares()` with `to` being their address and `from` being any address they want `to` transfer allocation `from`, and the call would succeed.
Apply access control modifiers as appropriate for this contract, for example `onlyRole()`.
null
```\\nfunction \\_beforeTokenTransfer(\\n address operator,\\n address from,\\n address to,\\n uint256[] memory ids,\\n uint256[] memory amounts,\\n bytes memory data\\n) internal virtual override {\\n for (uint256 i = 0; i < ids.length; i++) {\\n if (FortaStakingUtils.isActive(ids[i])) {\\n uint8 subjectType = FortaStakingUtils.subjectTypeOfShares(ids[i]);\\n if (subjectType == DELEGATOR\\_NODE\\_RUNNER\\_SUBJECT && to != address(0) && from != address(0)) {\\n \\_allocator.didTransferShares(ids[i], subjectType, from, to, amounts[i]);\\n }\\n```\\n
Incorrect reward epoch start date calculation
high
The Forta rewards system is based on epochs. A privileged address with the role `REWARDER_ROLE` calls the `reward()` function with a parameter for a specific `epochNumber` that consequently distributes the rewards for that epoch. Additionally, as users stake and delegate their stake, accounts in the Forta system accrue weight that is based on the active stake to distribute these rewards. Since accounts can modify their stake as well as delegate or un-delegate it, the rewards weight for each account can be modified, as seen, for example, in the `didAllocate()` function. In turn, this modifies the `DelegatedAccRewards` storage struct that stores the accumulated rewards for each share id. To keep track of changes done to the accumulated rewards, epochs with checkpoints are used to manage the accumulated rate of rewards, their value at the checkpoint, and the timestamp of the checkpoint.\\nFor example, in the `didAllocate()` function the `addRate()` function is being called to modify the accumulated rewards.\\n```\\nfunction didAllocate(\\n uint8 subjectType,\\n uint256 subject,\\n uint256 stakeAmount,\\n uint256 sharesAmount,\\n address staker\\n) external onlyRole(ALLOCATOR\\_CONTRACT\\_ROLE) {\\n bool delegated = getSubjectTypeAgency(subjectType) == SubjectStakeAgency.DELEGATED;\\n if (delegated) {\\n uint8 delegatorType = getDelegatorSubjectType(subjectType);\\n uint256 shareId = FortaStakingUtils.subjectToActive(delegatorType, subject);\\n DelegatedAccRewards storage s = \\_rewardsAccumulators[shareId];\\n s.delegated.addRate(stakeAmount);\\n```\\n\\nThen the function flow goes into `setRate()` that checks the existing accumulated rewards storage and modifies it based on the current timestamp.\\n```\\nfunction addRate(Accumulator storage acc, uint256 rate) internal {\\n setRate(acc, latest(acc).rate + rate);\\n}\\n```\\n\\n```\\nfunction setRate(Accumulator storage acc, uint256 rate) internal {\\n EpochCheckpoint memory ckpt = EpochCheckpoint({ timestamp: 
SafeCast.toUint32(block.timestamp), rate: SafeCast.toUint224(rate), value: getValue(acc) });\\n uint256 length = acc.checkpoints.length;\\n if (length > 0 && isCurrentEpoch(acc.checkpoints[length - 1].timestamp)) {\\n acc.checkpoints[length - 1] = ckpt;\\n } else {\\n acc.checkpoints.push(ckpt);\\n }\\n}\\n```\\n\\nNamely, it pushes epoch checkpoints to the list of account checkpoints based on its timestamp. If the last checkpoint's timestamp is during the current epoch, then the last checkpoint is replaced with the new one altogether. If the last checkpoint's timestamp is different from the current epoch, a new checkpoint is added to the list. However, the `isCurrentEpoch()` function calls a function `getCurrentEpochTimestamp()` that incorrectly determines the start date of the current epoch. In particular, it doesn't take the offset into account when calculating how many epochs have already passed.\\n```\\nfunction getCurrentEpochTimestamp() internal view returns (uint256) {\\n return ((block.timestamp / EPOCH\\_LENGTH) \\* EPOCH\\_LENGTH) + TIMESTAMP\\_OFFSET;\\n}\\n\\nfunction isCurrentEpoch(uint256 timestamp) internal view returns (bool) {\\n uint256 currentEpochStart = getCurrentEpochTimestamp();\\n return timestamp > currentEpochStart;\\n}\\n```\\n\\nInstead of `((block.timestamp / EPOCH_LENGTH) * EPOCH_LENGTH) + TIMESTAMP_OFFSET`, it should be `(((block.timestamp - TIMESTAMP_OFFSET) / EPOCH_LENGTH) * EPOCH_LENGTH) + TIMESTAMP_OFFSET`. 
In fact, it should simply call the `getEpochNumber()` function that correctly provides the epoch number for any timestamp.\\n```\\nfunction getEpochNumber(uint256 timestamp) internal pure returns (uint32) {\\n return SafeCast.toUint32((timestamp - TIMESTAMP\\_OFFSET) / EPOCH\\_LENGTH);\\n}\\n```\\n\\nIn other words, the resulting function would look something like the following:\\n```\\n function getCurrentEpochTimestamp() public view returns (uint256) {\\n return (getEpochNumber(block.timestamp) * EPOCH_LENGTH) + TIMESTAMP_OFFSET;\\n }\\n```\\n\\nOtherwise, if `block.timestamp` is such that `(block.timestamp - TIMESTAMP_OFFSET) / EPOCH_LENGTH = n` and `block.timestamp` / EPOCH_LENGTH = n+1, which would happen on roughly 4 out of 7 days of the week since `EPOCH_LENGTH = 1 weeks` and `TIMESTAMP_OFFSET = 4 days`, this would cause the `getCurrentEpochTimestamp()` function to return the end timestamp of the epoch (which is in the future) instead of the start. Therefore, if a checkpoint with such a timestamp is committed to the account's accumulated rewards checkpoints list, it will always fail the below check in the epoch it got submitted, and any checkpoint committed afterwards but during the same epoch with a similar type of `block.timestamp` (i.e. satisfying the condition at the beginning of this paragraph), would be pushed to the top of the list instead of replacing the previous checkpoint.\\n```\\nif (length > 0 && isCurrentEpoch(acc.checkpoints[length - 1].timestamp)) {\\n acc.checkpoints[length - 1] = ckpt;\\n} else {\\n acc.checkpoints.push(ckpt);\\n```\\n\\nThis causes several checkpoints to be stored for the same epoch, which would cause issues in functions such as `getAtEpoch()`, that feeds into `getValueAtEpoch()` function that provides data for the rewards' share calculation. 
In the end, this would cause issues in the accounting for the rewards calculation resulting in incorrect distributions.\\nDuring the discussion with the Forta Foundation team, it was additionally discovered that there are edge cases around the limits of epochs. Specifically, epoch's end time and the subsequent epoch's start time are exactly the same, although it should be that it is only the start of the next epoch. Similarly, that start time isn't recognized as part of the epoch due to `>` sign instead of `>=`. In particular, the following changes need to be made:\\n```\\n function getEpochEndTimestamp(uint256 epochNumber) public pure returns (uint256) {\\n return ((epochNumber + 1) * EPOCH_LENGTH) + TIMESTAMP_OFFSET - 1; <---- so it is 23:59:59 instead of next day 00:00:00\\n }\\n\\n function isCurrentEpoch(uint256 timestamp) public view returns (bool) {\\n uint256 currentEpochStart = getCurrentEpochTimestamp();\\n return timestamp >= currentEpochStart; <--- for the first second on Monday\\n }\\n```\\n
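The divergence between the two formulas can be checked with plain integer arithmetic. The constants below mirror `EPOCH_LENGTH = 1 weeks` and `TIMESTAMP_OFFSET = 4 days` from the source; the function names are illustrative:

```python
# Reproduces the epoch-start miscalculation described above.
EPOCH_LENGTH = 7 * 24 * 3600      # 1 weeks, in seconds
TIMESTAMP_OFFSET = 4 * 24 * 3600  # 4 days, in seconds

def buggy_epoch_start(ts: int) -> int:
    # ((block.timestamp / EPOCH_LENGTH) * EPOCH_LENGTH) + TIMESTAMP_OFFSET
    return (ts // EPOCH_LENGTH) * EPOCH_LENGTH + TIMESTAMP_OFFSET

def fixed_epoch_start(ts: int) -> int:
    # (((block.timestamp - TIMESTAMP_OFFSET) / EPOCH_LENGTH) * EPOCH_LENGTH) + TIMESTAMP_OFFSET
    return ((ts - TIMESTAMP_OFFSET) // EPOCH_LENGTH) * EPOCH_LENGTH + TIMESTAMP_OFFSET

ts = 700_000  # past one whole week, but before that week's offset boundary
assert buggy_epoch_start(ts) > ts    # "start" of the epoch lies in the future
assert fixed_epoch_start(ts) <= ts   # corrected start precedes the timestamp
# A checkpoint at `ts` therefore fails isCurrentEpoch() under the buggy formula:
assert not (ts > buggy_epoch_start(ts))
```

Any timestamp in the window where `ts // EPOCH_LENGTH` has already advanced but `(ts - TIMESTAMP_OFFSET) // EPOCH_LENGTH` has not (roughly 4 of every 7 days) triggers the same mismatch.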
A refactor of the epoch timestamp calculation functions is recommended to account for:\\nThe correct epoch number to calculate the start and end timestamps of epochs.\\nThe boundaries of epochs coinciding.\\nClarity in functions' intent. For example, adding a function just to calculate any epoch's start time and renaming `getCurrentEpochTimestamp()` to `getCurrentEpochStartTimestamp()`.
null
```\\nfunction didAllocate(\\n uint8 subjectType,\\n uint256 subject,\\n uint256 stakeAmount,\\n uint256 sharesAmount,\\n address staker\\n) external onlyRole(ALLOCATOR\\_CONTRACT\\_ROLE) {\\n bool delegated = getSubjectTypeAgency(subjectType) == SubjectStakeAgency.DELEGATED;\\n if (delegated) {\\n uint8 delegatorType = getDelegatorSubjectType(subjectType);\\n uint256 shareId = FortaStakingUtils.subjectToActive(delegatorType, subject);\\n DelegatedAccRewards storage s = \\_rewardsAccumulators[shareId];\\n s.delegated.addRate(stakeAmount);\\n```\\n
A single unfreeze dismisses all other slashing proposal freezes
high
In order to retaliate against malicious actors, the Forta staking system allows users to submit slashing proposals that are guarded by a deposit submitted along with a slashing reason. These proposals immediately freeze the proposal's subject's stake, blocking them from withdrawing that stake.\\nAt the same time, there can be multiple proposals submitted against the same subject, which works out with freezing: the subject remains frozen with each proposal submitted. However, once any one of the active proposals against the subject gets to the end of its lifecycle, be it `REJECTED`, `DISMISSED`, `EXECUTED`, or `REVERTED`, the subject gets unfrozen altogether. The other proposals might still be active, but the stake is no longer frozen, allowing the subject to withdraw it if they would like.\\nIn terms of impact, this allows bad actors to avoid the punishment intended by the slashes and freezes. A malicious actor could, for example, submit a faulty proposal against themselves in the hopes that it will get quickly rejected or dismissed while the existing, legitimate proposals against them are still being considered. This would allow them to get unfrozen quickly and withdraw their stake. 
Similarly, in the event a bad staker has several proposals against them, they could withdraw right after a single slashing proposal goes through.\\n```\\nfunction dismissSlashProposal(uint256 \\_proposalId, string[] calldata \\_evidence) external onlyRole(SLASHING\\_ARBITER\\_ROLE) {\\n \\_transition(\\_proposalId, DISMISSED);\\n \\_submitEvidence(\\_proposalId, DISMISSED, \\_evidence);\\n \\_returnDeposit(\\_proposalId);\\n \\_unfreeze(\\_proposalId);\\n}\\n```\\n\\n```\\nfunction rejectSlashProposal(uint256 \\_proposalId, string[] calldata \\_evidence) external onlyRole(SLASHING\\_ARBITER\\_ROLE) {\\n \\_transition(\\_proposalId, REJECTED);\\n \\_submitEvidence(\\_proposalId, REJECTED, \\_evidence);\\n \\_slashDeposit(\\_proposalId);\\n \\_unfreeze(\\_proposalId);\\n}\\n```\\n\\n```\\nfunction reviewSlashProposalParameters(\\n uint256 \\_proposalId,\\n uint8 \\_subjectType,\\n uint256 \\_subjectId,\\n bytes32 \\_penaltyId,\\n string[] calldata \\_evidence\\n) external onlyRole(SLASHING\\_ARBITER\\_ROLE) onlyInState(\\_proposalId, IN\\_REVIEW) onlyValidSlashPenaltyId(\\_penaltyId) onlyValidSubjectType(\\_subjectType) notAgencyType(\\_subjectType, SubjectStakeAgency.DELEGATOR) {\\n // No need to check for proposal existence, onlyInState will revert if \\_proposalId is in undefined state\\n if (!subjectGateway.isRegistered(\\_subjectType, \\_subjectId)) revert NonRegisteredSubject(\\_subjectType, \\_subjectId);\\n\\n \\_submitEvidence(\\_proposalId, IN\\_REVIEW, \\_evidence);\\n if (\\_subjectType != proposals[\\_proposalId].subjectType || \\_subjectId != proposals[\\_proposalId].subjectId) {\\n \\_unfreeze(\\_proposalId);\\n \\_freeze(\\_subjectType, \\_subjectId);\\n }\\n```\\n\\n```\\nfunction revertSlashProposal(uint256 \\_proposalId, string[] calldata \\_evidence) external {\\n \\_authorizeRevertSlashProposal(\\_proposalId);\\n \\_transition(\\_proposalId, REVERTED);\\n \\_submitEvidence(\\_proposalId, REVERTED, \\_evidence);\\n 
\\_unfreeze(\\_proposalId);\\n}\\n```\\n\\n```\\nfunction executeSlashProposal(uint256 \\_proposalId) external onlyRole(SLASHER\\_ROLE) {\\n \\_transition(\\_proposalId, EXECUTED);\\n Proposal memory proposal = proposals[\\_proposalId];\\n slashingExecutor.slash(proposal.subjectType, proposal.subjectId, getSlashedStakeValue(\\_proposalId), proposal.proposer, slashPercentToProposer);\\n slashingExecutor.freeze(proposal.subjectType, proposal.subjectId, false);\\n}\\n```\\n\\n```\\nfunction \\_unfreeze(uint256 \\_proposalId) private {\\n slashingExecutor.freeze(proposals[\\_proposalId].subjectType, proposals[\\_proposalId].subjectId, false);\\n}\\n```\\n
Introduce a check in the unfreezing mechanics to first ensure there are no other active proposals for that subject.
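One possible shape for such a check is reference-counting freezes per subject, so a subject only unfreezes once every open proposal against it has been resolved. A minimal sketch with hypothetical names (not the contract's actual API):

```python
# Hypothetical sketch: a subject stays frozen until every active
# proposal against it has reached the end of its lifecycle.
from collections import defaultdict

class FreezeTracker:
    def __init__(self):
        self.active_proposals = defaultdict(int)  # subject -> open proposal count

    def freeze(self, subject):
        self.active_proposals[subject] += 1

    def unfreeze(self, subject):
        assert self.active_proposals[subject] > 0
        self.active_proposals[subject] -= 1

    def is_frozen(self, subject) -> bool:
        return self.active_proposals[subject] > 0

t = FreezeTracker()
t.freeze("nodeRunner-1")    # legitimate proposal
t.freeze("nodeRunner-1")    # self-submitted faulty proposal
t.unfreeze("nodeRunner-1")  # faulty proposal dismissed
assert t.is_frozen("nodeRunner-1")  # still frozen by the remaining proposal
```

With this scheme, dismissing the self-submitted proposal no longer lifts the freeze imposed by the legitimate one.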
null
```\\nfunction dismissSlashProposal(uint256 \\_proposalId, string[] calldata \\_evidence) external onlyRole(SLASHING\\_ARBITER\\_ROLE) {\\n \\_transition(\\_proposalId, DISMISSED);\\n \\_submitEvidence(\\_proposalId, DISMISSED, \\_evidence);\\n \\_returnDeposit(\\_proposalId);\\n \\_unfreeze(\\_proposalId);\\n}\\n```\\n
Storage gap variables slightly off from the intended size
medium
The Forta staking system is using upgradeable proxies for its deployment strategy. To avoid storage collisions between contract versions during upgrades, uint256[] private `__gap` array variables are introduced that create a storage buffer. Together with contract state variables, the storage slots should sum up to 50. For example, the `__gap` variable is present in the `BaseComponentUpgradeable` component, which is the base of most Forta contracts, and there is a helpful comment in `AgentRegistryCore` that describes how its relevant `__gap` variable size was calculated:\\n```\\nuint256[50] private \\_\\_gap;\\n```\\n\\n```\\nuint256[41] private \\_\\_gap; // 50 - 1 (frontRunningDelay) - 3 (\\_stakeThreshold) - 5 StakeSubjectUpgradeable\\n```\\n\\nHowever, there are a few places where the `__gap` size was not computed correctly to get the storage slots up to 50. Some of these are:\\n```\\nuint256[49] private \\_\\_gap;\\n```\\n\\n```\\nuint256[47] private \\_\\_gap;\\n```\\n\\n```\\nuint256[44] private \\_\\_gap;\\n```\\n\\nWhile these still provide large storage buffers, it is best if the `__gap` variables are calculated to hold the same buffer within contracts of similar types as per the initial intentions to avoid confusion.\\nDuring conversations with the Forta Foundation team, it appears that some contracts like `ScannerRegistry` and `AgentRegistry` should instead add up to 45 with their `__gap` variable due to the `StakeSubject` contracts they inherit from adding 5 from themselves. This is something to note and be careful with as well for future upgrades.
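The convention can be captured as a simple invariant that could live in a layout test; the numbers below come from the `AgentRegistryCore` comment quoted above:

```python
# Invariant: reserved gap + declared state-variable slots (+ slots consumed by
# mixins such as StakeSubjectUpgradeable) should always total the same buffer.
TARGET_SLOTS = 50

# AgentRegistryCore, per its own comment: 50 - 1 (frontRunningDelay)
# - 3 (_stakeThreshold) - 5 (StakeSubjectUpgradeable) = 41
assert 41 + 1 + 3 + 5 == TARGET_SLOTS

# A contract declaring no extra state but reserving only 49 slots
# leaves the layout one slot short of the intended buffer:
assert 49 + 0 != TARGET_SLOTS
```

Encoding the sum explicitly makes a miscounted `__gap` fail loudly instead of silently shrinking the upgrade buffer.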
Provide appropriate sizes for the `__gap` variables to have a consistent storage layout approach that would help avoid storage issues with future versions of the system.
null
```\\nuint256[50] private \\_\\_gap;\\n```\\n
AgentRegistryCore - Agent Creation DoS
medium
AgentRegistryCore allows anyone to mint an `agentID` for the desired owner address. However, in some cases, it may fall prey to DoS, either deliberately or unintentionally.\\nFor instance, let's assume the Front Running Protection is disabled or the `frontRunningDelay` is 0. It means anyone can directly create an agent without any prior commitment. Thus, anyone can observe pending transactions and try to front-run them to mint an `agentID` before the victim does, blocking the victim from minting the desired `agentID`.\\nAlso, it may be possible that a malicious actor succeeds in frontrunning a transaction with manipulated data/chainIDs but with the same owner address and `agentID`. There is a good chance that the victim still accepts the attacker's transaction as valid, even though its own transaction reverted, since the victim still sees itself as the owner of that ID.\\nNow take the case where the frontrunning protection is enabled. Still, there is a good chance that two users commit to the same `agentIDs` in the same block, thus getting the same frontrunning delay. 
Then it becomes a game of luck: whoever creates that agent first gets the ID minted to their address, and the other user's transaction reverts, wasting the time spent on the delay.\\nAs the `agentIDs` can be picked by users, the chances of collisions with an already minted ID will increase over time, causing unnecessary reverts for others.\\nSince there is also no restriction on the owner address, anyone can spam-mint any `agentID` to any address for any profitable reason.\\n```\\nfunction createAgent(uint256 agentId, address owner, string calldata metadata, uint256[] calldata chainIds)\\npublic\\n onlySorted(chainIds)\\n frontrunProtected(keccak256(abi.encodePacked(agentId, owner, metadata, chainIds)), frontRunningDelay)\\n{\\n \\_mint(owner, agentId);\\n \\_beforeAgentUpdate(agentId, metadata, chainIds);\\n \\_agentUpdate(agentId, metadata, chainIds);\\n \\_afterAgentUpdate(agentId, metadata, chainIds);\\n}\\n```\\n
Modify function `prepareAgent` to not commit an already registered `agentID`.\\nA better approach could be to allow sequential minting of `agentIDs` using some counters.\\nOnly allow users to mint an `agentID` either for themselves or for an address that has approved them.
null
```\\nfunction createAgent(uint256 agentId, address owner, string calldata metadata, uint256[] calldata chainIds)\\npublic\\n onlySorted(chainIds)\\n frontrunProtected(keccak256(abi.encodePacked(agentId, owner, metadata, chainIds)), frontRunningDelay)\\n{\\n \\_mint(owner, agentId);\\n \\_beforeAgentUpdate(agentId, metadata, chainIds);\\n \\_agentUpdate(agentId, metadata, chainIds);\\n \\_afterAgentUpdate(agentId, metadata, chainIds);\\n}\\n```\\n
Lack of checks for rewarding an epoch that has already been rewarded
medium
To give rewards to the participating stakers, the Forta system utilizes reward epochs for each `shareId`, i.e. a delegated staking share. Each epoch gets their own reward distribution, and then `StakeAllocator` and `RewardsDistributor` contracts along with the Forta staking shares determine how much the users get.\\nTo actually allocate these rewards, a privileged account with the role `REWARDER_ROLE` calls the `RewardsDistributor.reward()` function with appropriate parameters to store the `amount` a `shareId` gets for that specific `epochNumber`, and then adds the `amount` to the `totalRewardsDistributed` contract variable for tracking. However, there is no check that the `shareId` already received rewards for that `epoch`. The new reward `amount` simply replaces the old reward `amount`, and `totalRewardsDistributed` gets the new `amount` added to it anyway. This causes inconsistencies with accounting in the `totalRewardsDistributed` variable.\\nAlthough `totalRewardsDistributed` is essentially isolated to the `sweep()` function to allow transferring out the reward tokens without taking away those tokens reserved for the reward distribution, this still creates an inconsistency, albeit a minor one in the context of the current system.\\nSimilarly, the `sweep()` function deducts the `totalRewardsDistributed` amount instead of the amount of pending rewards only. In other words, either there should be a different variable that tracks only pending rewards, or the `totalRewardsDistributed` should have token amounts deducted from it when users execute the `claimRewards()` function. 
Otherwise, after a few epochs there will be a really large `totalRewardsDistributed` amount that might not reflect the real amount of pending reward tokens left on the contract, and the `sweep()` function for the reward token is likely to fail for any amount being transferred out.\\n```\\nfunction reward(\\n uint8 subjectType,\\n uint256 subjectId,\\n uint256 amount,\\n uint256 epochNumber\\n) external onlyRole(REWARDER\\_ROLE) {\\n if (subjectType != NODE\\_RUNNER\\_SUBJECT) revert InvalidSubjectType(subjectType);\\n if (!\\_subjectGateway.isRegistered(subjectType, subjectId)) revert RewardingNonRegisteredSubject(subjectType, subjectId);\\n uint256 shareId = FortaStakingUtils.subjectToActive(getDelegatorSubjectType(subjectType), subjectId);\\n \\_rewardsPerEpoch[shareId][epochNumber] = amount;\\n totalRewardsDistributed += amount;\\n emit Rewarded(subjectType, subjectId, amount, epochNumber);\\n}\\n```\\n
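The accounting drift can be modeled in a few lines; the names mirror the contract's variables, but the model is a deliberate simplification:

```python
# Minimal model of the drift: rewarding the same epoch twice replaces the
# per-epoch amount but double-counts the running total.
rewards_per_epoch = {}        # (share_id, epoch) -> amount
total_rewards_distributed = 0

def reward(share_id, epoch, amount):
    global total_rewards_distributed
    rewards_per_epoch[(share_id, epoch)] = amount   # overwrite, no check
    total_rewards_distributed += amount             # but still accumulate

reward(1, 7, 100)
reward(1, 7, 100)  # same epoch rewarded again, e.g. by operator mistake
assert rewards_per_epoch[(1, 7)] == 100     # only 100 is actually claimable
assert total_rewards_distributed == 200     # yet 200 is treated as reserved
```

The 100-token discrepancy stays in `total_rewards_distributed` forever, which is exactly what would make `sweep()` over-reserve tokens on the real contract.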
Implement checks as appropriate to the `reward()` function to ensure correct behavior of `totalRewardsDistributed` tracking. Also, implement necessary changes to the tracking of pending rewards, if necessary.
null
```\\nfunction reward(\\n uint8 subjectType,\\n uint256 subjectId,\\n uint256 amount,\\n uint256 epochNumber\\n) external onlyRole(REWARDER\\_ROLE) {\\n if (subjectType != NODE\\_RUNNER\\_SUBJECT) revert InvalidSubjectType(subjectType);\\n if (!\\_subjectGateway.isRegistered(subjectType, subjectId)) revert RewardingNonRegisteredSubject(subjectType, subjectId);\\n uint256 shareId = FortaStakingUtils.subjectToActive(getDelegatorSubjectType(subjectType), subjectId);\\n \\_rewardsPerEpoch[shareId][epochNumber] = amount;\\n totalRewardsDistributed += amount;\\n emit Rewarded(subjectType, subjectId, amount, epochNumber);\\n}\\n```\\n
Reentrancy in FortaStaking during ERC1155 mints
medium
In the Forta staking system, the staking shares (both “active” and “inactive”) are represented as tokens implemented according to the `ERC1155` standard. The specific implementation that is being used utilizes a smart contract acceptance check `_doSafeTransferAcceptanceCheck()` upon mints to the recipient.\\n```\\ncontract FortaStaking is BaseComponentUpgradeable, ERC1155SupplyUpgradeable, SubjectTypeValidator, ISlashingExecutor, IStakeMigrator {\\n```\\n\\nThe specific implementation for `ERC1155SupplyUpgradeable` contracts can be found here, and the smart contract check can be found here.\\nThis opens up reentrancy into the system's flow. In fact, the reentrancy occurs on all mints that happen in the below functions, and it happens before a call to another Forta contract for allocation is made via either `_allocator.depositAllocation` or _allocator.withdrawAllocation:\\n```\\nfunction deposit(\\n uint8 subjectType,\\n uint256 subject,\\n uint256 stakeValue\\n) external onlyValidSubjectType(subjectType) notAgencyType(subjectType, SubjectStakeAgency.MANAGED) returns (uint256) {\\n if (address(subjectGateway) == address(0)) revert ZeroAddress("subjectGateway");\\n if (!subjectGateway.isStakeActivatedFor(subjectType, subject)) revert StakeInactiveOrSubjectNotFound();\\n address staker = \\_msgSender();\\n uint256 activeSharesId = FortaStakingUtils.subjectToActive(subjectType, subject);\\n bool reachedMax;\\n (stakeValue, reachedMax) = \\_getInboundStake(subjectType, subject, stakeValue);\\n if (reachedMax) {\\n emit MaxStakeReached(subjectType, subject);\\n }\\n uint256 sharesValue = stakeToActiveShares(activeSharesId, stakeValue);\\n SafeERC20.safeTransferFrom(stakedToken, staker, address(this), stakeValue);\\n\\n \\_activeStake.mint(activeSharesId, stakeValue);\\n \\_mint(staker, activeSharesId, sharesValue, new bytes(0));\\n emit StakeDeposited(subjectType, subject, staker, stakeValue);\\n \\_allocator.depositAllocation(activeSharesId, subjectType, subject, 
staker, stakeValue, sharesValue);\\n return sharesValue;\\n}\\n```\\n\\n```\\nfunction migrate(\\n uint8 oldSubjectType,\\n uint256 oldSubject,\\n uint8 newSubjectType,\\n uint256 newSubject,\\n address staker\\n) external onlyRole(SCANNER\\_2\\_NODE\\_RUNNER\\_MIGRATOR\\_ROLE) {\\n if (oldSubjectType != SCANNER\\_SUBJECT) revert InvalidSubjectType(oldSubjectType);\\n if (newSubjectType != NODE\\_RUNNER\\_SUBJECT) revert InvalidSubjectType(newSubjectType); \\n if (isFrozen(oldSubjectType, oldSubject)) revert FrozenSubject();\\n\\n uint256 oldSharesId = FortaStakingUtils.subjectToActive(oldSubjectType, oldSubject);\\n uint256 oldShares = balanceOf(staker, oldSharesId);\\n uint256 stake = activeSharesToStake(oldSharesId, oldShares);\\n uint256 newSharesId = FortaStakingUtils.subjectToActive(newSubjectType, newSubject);\\n uint256 newShares = stakeToActiveShares(newSharesId, stake);\\n\\n \\_activeStake.burn(oldSharesId, stake);\\n \\_activeStake.mint(newSharesId, stake);\\n \\_burn(staker, oldSharesId, oldShares);\\n \\_mint(staker, newSharesId, newShares, new bytes(0));\\n emit StakeDeposited(newSubjectType, newSubject, staker, stake);\\n \\_allocator.depositAllocation(newSharesId, newSubjectType, newSubject, staker, stake, newShares);\\n}\\n```\\n\\n```\\nfunction initiateWithdrawal(\\n uint8 subjectType,\\n uint256 subject,\\n uint256 sharesValue\\n) external onlyValidSubjectType(subjectType) returns (uint64) {\\n address staker = \\_msgSender();\\n uint256 activeSharesId = FortaStakingUtils.subjectToActive(subjectType, subject);\\n if (balanceOf(staker, activeSharesId) == 0) revert NoActiveShares();\\n uint64 deadline = SafeCast.toUint64(block.timestamp) + \\_withdrawalDelay;\\n\\n \\_lockingDelay[activeSharesId][staker].setDeadline(deadline);\\n\\n uint256 activeShares = Math.min(sharesValue, balanceOf(staker, activeSharesId));\\n uint256 stakeValue = activeSharesToStake(activeSharesId, activeShares);\\n uint256 inactiveShares = 
stakeToInactiveShares(FortaStakingUtils.activeToInactive(activeSharesId), stakeValue);\\n SubjectStakeAgency agency = getSubjectTypeAgency(subjectType);\\n \\_activeStake.burn(activeSharesId, stakeValue);\\n \\_inactiveStake.mint(FortaStakingUtils.activeToInactive(activeSharesId), stakeValue);\\n \\_burn(staker, activeSharesId, activeShares);\\n \\_mint(staker, FortaStakingUtils.activeToInactive(activeSharesId), inactiveShares, new bytes(0));\\n if (agency == SubjectStakeAgency.DELEGATED || agency == SubjectStakeAgency.DELEGATOR) {\\n \\_allocator.withdrawAllocation(activeSharesId, subjectType, subject, staker, stakeValue, activeShares);\\n }\\n```\\n\\nAlthough this doesn't seem to be an issue in the current Forta system of contracts since the allocator's logic doesn't seem to be manipulable, this could still be dangerous as it opens up an external execution flow.
Consider introducing a reentrancy check, or emphasize this behavior in the documentation, so that both other projects that later build on this system and future upgrades or maintenance work on the Forta staking system itself are implemented safely.
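A language-agnostic sketch of such a guard is shown below; `deposit` here is a hypothetical stand-in for the staking flow, not the contract's actual code:

```python
# Sketch of a non-reentrant guard: nested entry into the guarded flow
# (as an onERC1155Received callback could attempt) is rejected.
def non_reentrant(fn):
    entered = False
    def wrapper(*args, **kwargs):
        nonlocal entered
        if entered:
            raise RuntimeError("reentrant call")
        entered = True
        try:
            return fn(*args, **kwargs)
        finally:
            entered = False
    return wrapper

@non_reentrant
def deposit(reenter=False):
    # the ERC1155 mint's acceptance check hands control to the recipient here
    if reenter:
        deposit()  # a malicious callback re-entering the flow

deposit()  # a normal call succeeds
try:
    deposit(reenter=True)
    raise AssertionError("nested call should have been rejected")
except RuntimeError:
    pass
```

This mirrors the mutex pattern of Solidity reentrancy guards: state is flagged before the external call and cleared afterwards, so a nested entry fails fast.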
null
```\\ncontract FortaStaking is BaseComponentUpgradeable, ERC1155SupplyUpgradeable, SubjectTypeValidator, ISlashingExecutor, IStakeMigrator {\\n```\\n
Unnecessary code blocks that check the same condition
low
In the `RewardsDistributor` there is a function that allows setting delegation fees for a `NodeRunner`. It adjusts the `fees[]` array for that node as appropriate. However, it evaluates the same condition twice in a row.\\n```\\nif (fees[1].sinceEpoch != 0) {\\n if (Accumulators.getCurrentEpochNumber() < fees[1].sinceEpoch + delegationParamsEpochDelay) revert SetDelegationFeeNotReady();\\n}\\nif (fees[1].sinceEpoch != 0) {\\n fees[0] = fees[1];\\n}\\n```\\n
Consider refactoring this under a single code block.
null
```\\nif (fees[1].sinceEpoch != 0) {\\n if (Accumulators.getCurrentEpochNumber() < fees[1].sinceEpoch + delegationParamsEpochDelay) revert SetDelegationFeeNotReady();\\n}\\nif (fees[1].sinceEpoch != 0) {\\n fees[0] = fees[1];\\n}\\n```\\n
Event spam in RewardsDistributor.claimRewards
low
The `RewardsDistributor` contract allows users to claim their rewards through the `claimRewards()` function. It checks whether the user has already claimed the rewards for the specific epoch they are claiming for, but it does not check whether the user has any associated rewards at all. This could lead to the `ClaimedRewards` event being spammed by malicious users, especially on low-gas chains.\\n```\\nfor (uint256 i = 0; i < epochNumbers.length; i++) {\\n if (\\_claimedRewardsPerEpoch[shareId][epochNumbers[i]][\\_msgSender()]) revert AlreadyClaimed();\\n \\_claimedRewardsPerEpoch[shareId][epochNumbers[i]][\\_msgSender()] = true;\\n uint256 epochRewards = \\_availableReward(shareId, isDelegator, epochNumbers[i], \\_msgSender());\\n SafeERC20.safeTransfer(rewardsToken, \\_msgSender(), epochRewards);\\n emit ClaimedRewards(subjectType, subjectId, \\_msgSender(), epochNumbers[i], epochRewards);\\n```\\n
Add a check for rewards amounts being greater than 0.
null
```\\nfor (uint256 i = 0; i < epochNumbers.length; i++) {\\n if (\\_claimedRewardsPerEpoch[shareId][epochNumbers[i]][\\_msgSender()]) revert AlreadyClaimed();\\n \\_claimedRewardsPerEpoch[shareId][epochNumbers[i]][\\_msgSender()] = true;\\n uint256 epochRewards = \\_availableReward(shareId, isDelegator, epochNumbers[i], \\_msgSender());\\n SafeERC20.safeTransfer(rewardsToken, \\_msgSender(), epochRewards);\\n emit ClaimedRewards(subjectType, subjectId, \\_msgSender(), epochNumbers[i], epochRewards);\\n```\\n
Lack of a check for the subject's stake for reviewSlashProposalParameters
low
In the `SlashingController` contract, the address with the `SLASHING_ARBITER_ROLE` may call the `reviewSlashProposalParameters()` function to adjust the slashing proposal to a new `_subjectId` and `_subjectType`. However, unlike in the `proposeSlash()` function, there is no check for that subject having any stake at all.\\nWhile it may be assumed that the review function will be called by a privileged and knowledgeable actor, this additional check may avoid accidental mistakes.\\n```\\nif (subjectGateway.totalStakeFor(\\_subjectType, \\_subjectId) == 0) revert ZeroAmount("subject stake");\\n```\\n\\n```\\nif (\\_subjectType != proposals[\\_proposalId].subjectType || \\_subjectId != proposals[\\_proposalId].subjectId) {\\n \\_unfreeze(\\_proposalId);\\n \\_freeze(\\_subjectType, \\_subjectId);\\n}\\n```\\n
Add a check for the new subject having stake to slash.
null
```\\nif (subjectGateway.totalStakeFor(\\_subjectType, \\_subjectId) == 0) revert ZeroAmount("subject stake");\\n```\\n
Comment and code inconsistencies
low
During the audit a few inconsistencies were found between what the comments say and what the implemented code actually does.\\nSubject Type Agency for Scanner Subjects\\nIn the `SubjectTypeValidator`, the comment says that the `SCANNER_SUBJECT` is of the `DIRECT` agency type, i.e. it can be directly staked on by multiple different stakers. However, we found a difference in the implementation, where the concerned subject is defined as the `MANAGED` agency type, meaning it cannot be staked on directly; instead it's a delegated type and the allocation is supposed to be managed by its manager.\\n```\\n\\* - SCANNER\\_SUBJECT --> DIRECT\\n```\\n\\n```\\n} else if (subjectType == SCANNER\\_SUBJECT) {\\n return SubjectStakeAgency.MANAGED;\\n```\\n\\nDispatch refers to ERC721 tokens as ERC1155\\nOne of the comments describing the functionality to `link` and `unlink` agents and scanners refers to them as ERC1155 tokens, when in reality they are ERC721.\\n```\\n/\\*\\*\\n \\* @notice Assigns the job of running an agent to a scanner.\\n \\* @dev currently only allowed for DISPATCHER\\_ROLE (Assigner software).\\n \\* @dev emits Link(agentId, scannerId, true) event.\\n \\* @param agentId ERC1155 token id of the agent.\\n \\* @param scannerId ERC1155 token id of the scanner.\\n \\*/\\n```\\n\\nNodeRunnerRegistryCore comment that implies the reverse of what happens\\nA comment describing a helper function that returns the address for a given scanner ID describes the opposite behavior. 
The comment appears to have been copied from the function just above, which actually does what it says.\\n```\\n/// Converts scanner address to uint256 for FortaStaking Token Id.\\nfunction scannerIdToAddress(uint256 scannerId) public pure returns (address) {\\n return address(uint160(scannerId));\\n}\\n```\\n\\nScannerToNodeRunnerMigration comment that says that no NodeRunner tokens must be owned\\nFor the migration from Scanners to NodeRunners, a comment in the beginning of the file implies that for the system to work correctly, there must be no NodeRunner tokens owned prior to migration. After a conversation with the Forta Foundation team, it appears that this was an early design choice that is no longer relevant.\\n```\\n\\* @param nodeRunnerId If set as 0, a new NodeRunnerRegistry ERC721 will be minted to nodeRunner (but it must not own any prior),\\n```\\n\\n```\\n\\* @param nodeRunnerId If set as 0, a new NodeRunnerRegistry ERC721 will be minted to nodeRunner (but it must not own any prior),\\n```\\n
Verify the operational logic and fix either the concerned comments or defined logic as per the need.
null
```\\n\\* - SCANNER\\_SUBJECT --> DIRECT\\n```\\n
Oracle's _sanityCheck for prices will not work with slashing
high
The `_sanityCheck` verifies that the new price didn't change significantly:\\n```\\nuint256 maxPrice = curPrice +\\n ((curPrice \\*\\n self.PERIOD\\_PRICE\\_INCREASE\\_LIMIT \\*\\n \\_periodsSinceUpdate) / PERCENTAGE\\_DENOMINATOR);\\n\\nuint256 minPrice = curPrice -\\n ((curPrice \\*\\n self.PERIOD\\_PRICE\\_DECREASE\\_LIMIT \\*\\n \\_periodsSinceUpdate) / PERCENTAGE\\_DENOMINATOR);\\n\\nrequire(\\n \\_newPrice >= minPrice && \\_newPrice <= maxPrice,\\n "OracleUtils: price is insane"\\n```\\n\\nWhile the rewards of staking can be reasonably predicted, the balances may also be changed due to slashing. So any slashing event should reduce the price, and if enough ETH is slashed, the price will drop heavily. The oracle will not be updated because of the sanity check. After that, there will be an arbitrage opportunity, and everyone will be incentivized to withdraw as soon as possible. That process will inevitably devalue gETH to zero. The severity of this issue is also amplified by the fact that operators have no skin in the game and won't lose anything from slashing.
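A minimal Python model of the bound check (the limits and prices below are illustrative, not the protocol's actual parameters) shows how a slashing-driven drop falls outside the accepted band:

```python
PERCENTAGE_DENOMINATOR = 10_000

def sanity_check(cur_price, new_price, increase_limit, decrease_limit, periods):
    # mirrors the maxPrice/minPrice bounds from OracleUtils
    max_price = cur_price + cur_price * increase_limit * periods // PERCENTAGE_DENOMINATOR
    min_price = cur_price - cur_price * decrease_limit * periods // PERCENTAGE_DENOMINATOR
    return min_price <= new_price <= max_price

ONE = 10**18
# a 0.5%-per-period band tolerates ordinary staking rewards...
assert sanity_check(ONE, ONE * 1004 // 1000, 50, 50, 1)
# ...but a 10% drop caused by a heavy slashing event is rejected,
# leaving the oracle stuck at a stale, too-high price
assert not sanity_check(ONE, ONE * 90 // 100, 50, 50, 1)
```

Once the update is rejected, every withdrawal is priced against the stale (too high) rate, which is what creates the arbitrage described above.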
Make sure that slashing can be adequately processed when updating the price.
null
```\\nuint256 maxPrice = curPrice +\\n ((curPrice \\*\\n self.PERIOD\\_PRICE\\_INCREASE\\_LIMIT \\*\\n \\_periodsSinceUpdate) / PERCENTAGE\\_DENOMINATOR);\\n\\nuint256 minPrice = curPrice -\\n ((curPrice \\*\\n self.PERIOD\\_PRICE\\_DECREASE\\_LIMIT \\*\\n \\_periodsSinceUpdate) / PERCENTAGE\\_DENOMINATOR);\\n\\nrequire(\\n \\_newPrice >= minPrice && \\_newPrice <= maxPrice,\\n "OracleUtils: price is insane"\\n```\\n
MiniGovernance - fetchUpgradeProposal will always revert
high
In the function `fetchUpgradeProposal()`, `newProposal()` is called with a hard coded `duration` of 4 weeks. This means the function will always revert since `newProposal()` checks that the proposal `duration` is not more than the constant `MAX_PROPOSAL_DURATION` of 2 weeks. Effectively, this leaves MiniGovernance non-upgradeable.\\n```\\nGEM.newProposal(proposal.CONTROLLER, 2, proposal.NAME, 4 weeks);\\n```\\n\\n```\\nrequire(\\n duration <= MAX\\_PROPOSAL\\_DURATION,\\n "GeodeUtils: duration exceeds MAX\\_PROPOSAL\\_DURATION"\\n);\\n```\\n
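The inconsistency reduces to a two-line model (a Python sketch; the constants mirror the Solidity values):

```python
MAX_PROPOSAL_DURATION = 2 * 7 * 24 * 3600   # 2 weeks, as in GeodeUtils

def new_proposal(duration):
    # mirrors the require() inside newProposal
    if duration > MAX_PROPOSAL_DURATION:
        raise ValueError("GeodeUtils: duration exceeds MAX_PROPOSAL_DURATION")
    return "created"

# fetchUpgradeProposal hard-codes 4 weeks, so the call can never succeed:
try:
    new_proposal(4 * 7 * 24 * 3600)
    reverted = False
except ValueError:
    reverted = True
assert reverted
```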
Switch the hard coded proposal duration to 2 weeks.
null
```\\nGEM.newProposal(proposal.CONTROLLER, 2, proposal.NAME, 4 weeks);\\n```\\n
Updating interfaces of derivatives is done in a dangerous and unpredictable manner.
medium
The Geode Finance codebase provides planet maintainers with the ability to enable or disable different contracts to act as the main token contract. In fact, multiple separate contracts can be used at the same time if the planet maintainer decides so. Those contracts will have shared balances but will not share the allowances, as you can see below:\\n```\\nmapping(uint256 => mapping(address => uint256)) private \\_balances;\\n```\\n\\n```\\nmapping(address => mapping(address => uint256)) private \\_allowances;\\n```\\n\\nUnfortunately, this approach comes with some implications that are very hard to predict as they involve interactions with other systems, but it is possible to say that the consequences of those implications will almost always be negative. We will not be able to outline all the implications of this issue, but we can try and outline the pattern that they all would follow.\\nThere are really two ways to update an interface: set the new one and immediately unset the old one, or have them both run in parallel for some time. Let's look at them one by one.\\nIn the first case, the old interface is disabled immediately. Given that interfaces share balances, that will lead to some very serious consequences. Imagine the following sequence:\\nAlice deposits her derivatives into the DWP contract for liquidity mining.\\nPlanet maintainer updates the interface and immediately disables the old one.\\nThe DWP contract now has the old tokens and the new ones. But only the new ones are accounted for in the storage and thus can be withdrawn. Unfortunately, the old tokens are disabled, meaning that now both old and new tokens are lost.\\nThis can happen in pretty much any contract and not just the DWP token. Unless the holders had enough time to withdraw the derivatives back to their wallets, all the funds deposited into contracts could be lost.\\nThis leads us to the second case where the two interfaces are active in parallel. 
This would solve the issue above by allowing Alice to withdraw the old tokens from the DWP and make the new tokens follow. Unfortunately, there is an issue in that case as well.\\nSome DeFi contracts allow their owners to withdraw any tokens that are not accounted for by the internal accounting. DWP allows the withdrawal of admin fees if the contract has more tokens than `balances[]` store. Some contracts even allow to withdraw funds that were accidentally sent to the contract by people. Either to recover them or just as a part of dust collection. Let's call such contracts “dangerous contracts” for our purposes.\\nAlice deposits her derivatives into the dangerous contract.\\nPlanet maintainer sets a new interface.\\nOwner of the dangerous contract sees that some odd and unaccounted tokens landed in the contract. He learns those are real and are part of Geode ecosystem. So he takes them.\\nOld tokens will follow the new tokens. That means Alice now has no claim to them and the contract that they just left has broken accounting since numbers there are not backed by tokens anymore.\\nOne other issue we would like to highlight here is that despite the contracts being expected to have separate allowances, if the old contract has the allowance set, the initial 0 value of the new one will be ignored. Here is an example:\\nAlice approves Bob for 100 derivatives.\\nPlanet maintainer sets a new interface. The new interface has no allowance from Alice to Bob.\\nBob still can transfer new tokens from Alice to himself by transferring the old tokens for which he still has the allowance. New token balances will be updated accordingly.\\nAlice could also give Bob an allowance of 100 tokens in the new contract since that was her original intent, but this would mean that Bob now has 200 token allowance.\\nThis is extremely convoluted and will most likely result in errors made by the planet maintainers when updating the interfaces.
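The allowance implication can be demonstrated with a toy model of two interfaces sharing one balance ledger (hypothetical Python, not the actual ERC1155-based implementation):

```python
balances = {"alice": 100}                    # shared across all interfaces
allowances = {("old", "alice", "bob"): 100}  # per-interface

def transfer_from(interface, owner, spender, to, amount):
    key = (interface, owner, spender)
    if allowances.get(key, 0) < amount:
        raise PermissionError("insufficient allowance")
    allowances[key] -= amount
    balances[owner] -= amount
    balances[to] = balances.get(to, 0) + amount

# Bob has no allowance on the newly enabled interface...
try:
    transfer_from("new", "alice", "bob", "bob", 100)
except PermissionError:
    pass
# ...yet he can still move Alice's (shared) balance through the old one:
transfer_from("old", "alice", "bob", "bob", 100)
assert balances == {"alice": 0, "bob": 100}
```

The zero allowance on the new interface offers no protection, because the balance it would guard is the same ledger the old interface can already spend from.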
The safest option is to only allow a list of whitelisted interfaces that are well-documented and audited. Planet maintainers could then choose the ones that they see fit.
null
```\\nmapping(uint256 => mapping(address => uint256)) private \\_balances;\\n```\\n
Only the GOVERNANCE can initialize the Portal
medium
In the Portal's `initialize` function, the `_GOVERNANCE` is passed as a parameter:\\n```\\nfunction initialize(\\n address \\_GOVERNANCE,\\n address \\_gETH,\\n address \\_ORACLE\\_POSITION,\\n address \\_DEFAULT\\_gETH\\_INTERFACE,\\n address \\_DEFAULT\\_DWP,\\n address \\_DEFAULT\\_LP\\_TOKEN,\\n address \\_MINI\\_GOVERNANCE\\_POSITION,\\n uint256 \\_GOVERNANCE\\_TAX,\\n uint256 \\_COMET\\_TAX,\\n uint256 \\_MAX\\_MAINTAINER\\_FEE,\\n uint256 \\_BOOSTRAP\\_PERIOD\\n) public virtual override initializer {\\n \\_\\_ReentrancyGuard\\_init();\\n \\_\\_Pausable\\_init();\\n \\_\\_ERC1155Holder\\_init();\\n \\_\\_UUPSUpgradeable\\_init();\\n\\n GEODE.SENATE = \\_GOVERNANCE;\\n GEODE.GOVERNANCE = \\_GOVERNANCE;\\n GEODE.GOVERNANCE\\_TAX = \\_GOVERNANCE\\_TAX;\\n GEODE.MAX\\_GOVERNANCE\\_TAX = \\_GOVERNANCE\\_TAX;\\n GEODE.SENATE\\_EXPIRY = type(uint256).max;\\n\\n STAKEPOOL.GOVERNANCE = \\_GOVERNANCE;\\n STAKEPOOL.gETH = IgETH(\\_gETH);\\n STAKEPOOL.TELESCOPE.gETH = IgETH(\\_gETH);\\n STAKEPOOL.TELESCOPE.ORACLE\\_POSITION = \\_ORACLE\\_POSITION;\\n STAKEPOOL.TELESCOPE.MONOPOLY\\_THRESHOLD = 20000;\\n\\n updateStakingParams(\\n \\_DEFAULT\\_gETH\\_INTERFACE,\\n \\_DEFAULT\\_DWP,\\n \\_DEFAULT\\_LP\\_TOKEN,\\n \\_MAX\\_MAINTAINER\\_FEE,\\n \\_BOOSTRAP\\_PERIOD,\\n type(uint256).max,\\n type(uint256).max,\\n \\_COMET\\_TAX,\\n 3 days\\n );\\n```\\n\\nBut then it calls the `updateStakingParams` function, which requires the `msg.sender` to be the governance:\\n```\\nfunction updateStakingParams(\\n address \\_DEFAULT\\_gETH\\_INTERFACE,\\n address \\_DEFAULT\\_DWP,\\n address \\_DEFAULT\\_LP\\_TOKEN,\\n uint256 \\_MAX\\_MAINTAINER\\_FEE,\\n uint256 \\_BOOSTRAP\\_PERIOD,\\n uint256 \\_PERIOD\\_PRICE\\_INCREASE\\_LIMIT,\\n uint256 \\_PERIOD\\_PRICE\\_DECREASE\\_LIMIT,\\n uint256 \\_COMET\\_TAX,\\n uint256 \\_BOOST\\_SWITCH\\_LATENCY\\n) public virtual override {\\n require(\\n msg.sender == GEODE.GOVERNANCE,\\n "Portal: sender not GOVERNANCE"\\n );\\n```\\n\\nSo only the 
future governance can initialize the `Portal`. In the case of the Geode protocol, the governance will be represented by a token contract, making it hard to initialize promptly. Initialization should be done by an actor that is more flexible than governance.
Split the `updateStakingParams` function into public and private ones and use them accordingly.
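The recommended split could look like this (a Python sketch of the control flow, not Solidity):

```python
class Portal:
    def __init__(self):
        self.governance = None
        self.max_fee = None

    def initialize(self, governance, max_fee):
        self.governance = governance
        self._update_staking_params(max_fee)   # internal: no caller check needed

    def update_staking_params(self, caller, max_fee):
        # external entry point keeps the governance gate for later updates
        assert caller == self.governance, "Portal: sender not GOVERNANCE"
        self._update_staking_params(max_fee)

    def _update_staking_params(self, max_fee):
        self.max_fee = max_fee

p = Portal()
p.initialize("governance_token", 10)   # the deployer can now initialize directly
assert p.max_fee == 10
try:
    p.update_staking_params("attacker", 99)  # later updates stay gated
    ok = True
except AssertionError:
    ok = False
assert not ok and p.max_fee == 10
```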
null
```\\nfunction initialize(\\n address \\_GOVERNANCE,\\n address \\_gETH,\\n address \\_ORACLE\\_POSITION,\\n address \\_DEFAULT\\_gETH\\_INTERFACE,\\n address \\_DEFAULT\\_DWP,\\n address \\_DEFAULT\\_LP\\_TOKEN,\\n address \\_MINI\\_GOVERNANCE\\_POSITION,\\n uint256 \\_GOVERNANCE\\_TAX,\\n uint256 \\_COMET\\_TAX,\\n uint256 \\_MAX\\_MAINTAINER\\_FEE,\\n uint256 \\_BOOSTRAP\\_PERIOD\\n) public virtual override initializer {\\n \\_\\_ReentrancyGuard\\_init();\\n \\_\\_Pausable\\_init();\\n \\_\\_ERC1155Holder\\_init();\\n \\_\\_UUPSUpgradeable\\_init();\\n\\n GEODE.SENATE = \\_GOVERNANCE;\\n GEODE.GOVERNANCE = \\_GOVERNANCE;\\n GEODE.GOVERNANCE\\_TAX = \\_GOVERNANCE\\_TAX;\\n GEODE.MAX\\_GOVERNANCE\\_TAX = \\_GOVERNANCE\\_TAX;\\n GEODE.SENATE\\_EXPIRY = type(uint256).max;\\n\\n STAKEPOOL.GOVERNANCE = \\_GOVERNANCE;\\n STAKEPOOL.gETH = IgETH(\\_gETH);\\n STAKEPOOL.TELESCOPE.gETH = IgETH(\\_gETH);\\n STAKEPOOL.TELESCOPE.ORACLE\\_POSITION = \\_ORACLE\\_POSITION;\\n STAKEPOOL.TELESCOPE.MONOPOLY\\_THRESHOLD = 20000;\\n\\n updateStakingParams(\\n \\_DEFAULT\\_gETH\\_INTERFACE,\\n \\_DEFAULT\\_DWP,\\n \\_DEFAULT\\_LP\\_TOKEN,\\n \\_MAX\\_MAINTAINER\\_FEE,\\n \\_BOOSTRAP\\_PERIOD,\\n type(uint256).max,\\n type(uint256).max,\\n \\_COMET\\_TAX,\\n 3 days\\n );\\n```\\n
The maintainer of the MiniGovernance can block the changeMaintainer function
medium
Every entity with an ID has a controller and a maintainer. The controller tends to have more control, and the maintainer is mostly used for operational purposes. So the controller should be able to change the maintainer if that is required. Indeed we see that it is possible in the MiniGovernance too:\\n```\\nfunction changeMaintainer(\\n bytes calldata password,\\n bytes32 newPasswordHash,\\n address newMaintainer\\n)\\n external\\n virtual\\n override\\n onlyPortal\\n whenNotPaused\\n returns (bool success)\\n{\\n require(\\n SELF.PASSWORD\\_HASH == bytes32(0) ||\\n SELF.PASSWORD\\_HASH ==\\n keccak256(abi.encodePacked(SELF.ID, password))\\n );\\n SELF.PASSWORD\\_HASH = newPasswordHash;\\n\\n \\_refreshSenate(newMaintainer);\\n\\n success = true;\\n}\\n```\\n\\nHere the `changeMaintainer` function can only be called by the Portal, and only the controller can initiate that call. But the maintainer can pause the MiniGovernance, which will make this call revert because the `_refreshSenate` function has the `whenNotPaused` modifier. Thus the maintainer could intentionally prevent the controller from replacing them with another maintainer.
Make sure that the controller can always change the malicious maintainer.
null
```\\nfunction changeMaintainer(\\n bytes calldata password,\\n bytes32 newPasswordHash,\\n address newMaintainer\\n)\\n external\\n virtual\\n override\\n onlyPortal\\n whenNotPaused\\n returns (bool success)\\n{\\n require(\\n SELF.PASSWORD\\_HASH == bytes32(0) ||\\n SELF.PASSWORD\\_HASH ==\\n keccak256(abi.encodePacked(SELF.ID, password))\\n );\\n SELF.PASSWORD\\_HASH = newPasswordHash;\\n\\n \\_refreshSenate(newMaintainer);\\n\\n success = true;\\n}\\n```\\n
Entities are not required to be initiated
medium
Every entity (Planet, Comet, Operator) has a 3-step creation process:\\nCreation of the proposal.\\nApproval of the proposal.\\nInitiation of the entity.\\nThe last step is crucial, but it is never explicitly checked that the entity is initialized. The initiation always includes the `initiator` modifier that works with the `"initiated"` slot on DATASTORE:\\n```\\nmodifier initiator(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 \\_TYPE,\\n uint256 \\_id,\\n address \\_maintainer\\n) {\\n require(\\n msg.sender == DATASTORE.readAddressForId(\\_id, "CONTROLLER"),\\n "MaintainerUtils: sender NOT CONTROLLER"\\n );\\n require(\\n DATASTORE.readUintForId(\\_id, "TYPE") == \\_TYPE,\\n "MaintainerUtils: id NOT correct TYPE"\\n );\\n require(\\n DATASTORE.readUintForId(\\_id, "initiated") == 0,\\n "MaintainerUtils: already initiated"\\n );\\n\\n DATASTORE.writeAddressForId(\\_id, "maintainer", \\_maintainer);\\n\\n \\_;\\n\\n DATASTORE.writeUintForId(\\_id, "initiated", block.timestamp);\\n\\n emit IdInitiated(\\_id, \\_TYPE);\\n}\\n```\\n\\nBut this slot is never actually checked when the entities are used. While we did not find any profitable attack vector using uninitiated entities, the code will be upgraded, which may allow for possible attack vectors related to this issue.
Make sure the entities are initiated before they are used.
null
```\\nmodifier initiator(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 \\_TYPE,\\n uint256 \\_id,\\n address \\_maintainer\\n) {\\n require(\\n msg.sender == DATASTORE.readAddressForId(\\_id, "CONTROLLER"),\\n "MaintainerUtils: sender NOT CONTROLLER"\\n );\\n require(\\n DATASTORE.readUintForId(\\_id, "TYPE") == \\_TYPE,\\n "MaintainerUtils: id NOT correct TYPE"\\n );\\n require(\\n DATASTORE.readUintForId(\\_id, "initiated") == 0,\\n "MaintainerUtils: already initiated"\\n );\\n\\n DATASTORE.writeAddressForId(\\_id, "maintainer", \\_maintainer);\\n\\n \\_;\\n\\n DATASTORE.writeUintForId(\\_id, "initiated", block.timestamp);\\n\\n emit IdInitiated(\\_id, \\_TYPE);\\n}\\n```\\n
The blameOperator can be called for an alienated validator
medium
The `blameOperator` function is designed to be called by anyone. If some operator did not signal to exit in time, anyone can blame and imprison this operator.\\n```\\n/\\*\\*\\n \\* @notice allows improsening an Operator if the validator have not been exited until expectedExit\\n \\* @dev anyone can call this function\\n \\* @dev if operator has given enough allowence, they can rotate the validators to avoid being prisoned\\n \\*/\\nfunction blameOperator(\\n StakePool storage self,\\n DataStoreUtils.DataStore storage DATASTORE,\\n bytes calldata pk\\n) external {\\n if (\\n block.timestamp > self.TELESCOPE.\\_validators[pk].expectedExit &&\\n self.TELESCOPE.\\_validators[pk].state != 3\\n ) {\\n OracleUtils.imprison(\\n DATASTORE,\\n self.TELESCOPE.\\_validators[pk].operatorId\\n );\\n }\\n}\\n```\\n\\nThe problem is that it can be called for any state that is not `3` (self.TELESCOPE._validators[pk].state != 3). But it should only be called for active validators whose state equals `2`. So `blameOperator` can be called an unlimited number of times for alienated or unapproved validators. These types of validators cannot switch to state `3`.\\nThe severity of the issue is mitigated by the fact that this function is currently unavailable for users to call. But it is intended to be external once the withdrawal process is in place.
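The flawed condition in isolation (a Python sketch; states follow the contract's convention, where 2 is active and 3 is exited):

```python
def blamable(state, expected_exit, now):
    # buggy condition: any state other than 3 can be blamed
    return now > expected_exit and state != 3

assert blamable(state=2, expected_exit=10, now=11)   # intended: a late active validator
assert blamable(state=1, expected_exit=10, now=11)   # bug: alienated/unapproved too
assert not blamable(state=3, expected_exit=10, now=11)
# the fix is to require state == 2 instead of state != 3
```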
Make sure that you can only blame the operator of an active validator.
null
```\\n/\\*\\*\\n \\* @notice allows improsening an Operator if the validator have not been exited until expectedExit\\n \\* @dev anyone can call this function\\n \\* @dev if operator has given enough allowence, they can rotate the validators to avoid being prisoned\\n \\*/\\nfunction blameOperator(\\n StakePool storage self,\\n DataStoreUtils.DataStore storage DATASTORE,\\n bytes calldata pk\\n) external {\\n if (\\n block.timestamp > self.TELESCOPE.\\_validators[pk].expectedExit &&\\n self.TELESCOPE.\\_validators[pk].state != 3\\n ) {\\n OracleUtils.imprison(\\n DATASTORE,\\n self.TELESCOPE.\\_validators[pk].operatorId\\n );\\n }\\n}\\n```\\n
Latency timelocks on certain functions can be bypassed
medium
The functions `switchMaintainerFee()` and `switchWithdrawalBoost()` add a latency of typically three days to the current timestamp at which the new value is meant to be valid. However, they don't limit the number of times this value can be changed within the latency period. This allows a malicious maintainer to set their desired value twice and effectively make the change immediately. Let's take the first function as an example. The first call to it sets a value as the `newFee`, moving the old value to `priorFee`, which is effectively the fee in use until the time lock is up. A follow-up call to the function with the same value as a parameter would mean the “new” value overwrites the old `priorFee` while remaining in the queue for the switch.\\n```\\nfunction switchMaintainerFee(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 id,\\n uint256 newFee\\n) external {\\n DATASTORE.writeUintForId(\\n id,\\n "priorFee",\\n DATASTORE.readUintForId(id, "fee")\\n );\\n DATASTORE.writeUintForId(\\n id,\\n "feeSwitch",\\n block.timestamp + FEE\\_SWITCH\\_LATENCY\\n );\\n DATASTORE.writeUintForId(id, "fee", newFee);\\n\\n emit MaintainerFeeSwitched(\\n id,\\n newFee,\\n block.timestamp + FEE\\_SWITCH\\_LATENCY\\n );\\n}\\n```\\n\\n```\\nfunction getMaintainerFee(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 id\\n) internal view returns (uint256 fee) {\\n if (DATASTORE.readUintForId(id, "feeSwitch") > block.timestamp) {\\n return DATASTORE.readUintForId(id, "priorFee");\\n }\\n return DATASTORE.readUintForId(id, "fee");\\n}\\n```\\n
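The bypass is easy to see in a small Python model of the fee switch (field names follow the DATASTORE keys; the values are illustrative):

```python
FEE_SWITCH_LATENCY = 3 * 24 * 3600   # 3 days

class Pool:
    def __init__(self, fee):
        self.fee, self.prior_fee, self.fee_switch = fee, 0, 0

    def switch_fee(self, now, new_fee):
        self.prior_fee = self.fee              # "priorFee" stays active during latency
        self.fee_switch = now + FEE_SWITCH_LATENCY
        self.fee = new_fee

    def current_fee(self, now):
        return self.prior_fee if self.fee_switch > now else self.fee

p = Pool(fee=100)
p.switch_fee(now=0, new_fee=500)
assert p.current_fee(now=1) == 100   # the timelock holds after one call...
p.switch_fee(now=1, new_fee=500)     # ...but a second call overwrites priorFee
assert p.current_fee(now=2) == 500   # the new fee is effective immediately
```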
Add a check to make sure only one value can be set between time lock periods.
null
```\\nfunction switchMaintainerFee(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 id,\\n uint256 newFee\\n) external {\\n DATASTORE.writeUintForId(\\n id,\\n "priorFee",\\n DATASTORE.readUintForId(id, "fee")\\n );\\n DATASTORE.writeUintForId(\\n id,\\n "feeSwitch",\\n block.timestamp + FEE\\_SWITCH\\_LATENCY\\n );\\n DATASTORE.writeUintForId(id, "fee", newFee);\\n\\n emit MaintainerFeeSwitched(\\n id,\\n newFee,\\n block.timestamp + FEE\\_SWITCH\\_LATENCY\\n );\\n}\\n```\\n
MiniGovernance's senate has almost unlimited validity
medium
A new senate for the MiniGovernance contract is set in the following line:\\n```\\nGEM.\\_setSenate(newSenate, block.timestamp + SENATE\\_VALIDITY);\\n```\\n\\nThe validity period argument should not include `block.timestamp`, because it is going to be added a bit later in the code:\\n```\\nself.SENATE\\_EXPIRY = block.timestamp + \\_senatePeriod;\\n```\\n\\nSo currently, every senate of MiniGovernance will have much longer validity than it is supposed to.
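Numerically (a Python sketch with an illustrative timestamp and validity period):

```python
SENATE_VALIDITY = 180 * 24 * 3600   # illustrative validity period

def set_senate(now, senate_period):
    # GeodeUtils adds block.timestamp itself:
    # SENATE_EXPIRY = block.timestamp + _senatePeriod
    return now + senate_period

now = 1_700_000_000
# the caller passes block.timestamp + SENATE_VALIDITY instead of SENATE_VALIDITY
expiry = set_senate(now, now + SENATE_VALIDITY)
intended = now + SENATE_VALIDITY
assert expiry - intended == now   # overshoots by ~53 years at this timestamp
```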
Pass only `SENATE_VALIDITY` in the `_refreshSenate` function.
null
```\\nGEM.\\_setSenate(newSenate, block.timestamp + SENATE\\_VALIDITY);\\n```\\n
Proposed validators not accounted for in the monopoly check.
medium
The Geode team introduced a check that makes sure that node operators do not initiate more validators than a threshold called `MONOPOLY_THRESHOLD` allows. It is used on the call to `proposeStake(...)`, which the operator calls in order to propose new validators. It is worth mentioning that onboarding new validator nodes requires 2 steps: a proposal from the node operator and approval from the planet maintainer. After the first step validators get a status of `proposed`. After the second step validators get the status of `active` and all ETH accounting is done. The issue we found is that the proposal step performs the monopoly check but does not account for previously `proposed` but not yet `active` validators.\\nAssume that `MONOPOLY_THRESHOLD` is set to 5. The node operator could propose 4 new validators, pass the monopoly check, and have those validators labeled as `proposed`. The node operator could then propose 4 more validators in a separate transaction, and since the monopoly check does not account for the `proposed` validators, that would pass as well. Then in `beaconStake`, the maintainer-approval step, there is no monopoly check at all, so 8 validators could be activated at once.\\n```\\nrequire(\\n (DATASTORE.readUintForId(operatorId, "totalActiveValidators") +\\n pubkeys.length) <= self.TELESCOPE.MONOPOLY\\_THRESHOLD,\\n "StakeUtils: IceBear does NOT like monopolies"\\n);\\n```\\n
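A Python sketch of the check (the threshold and counts are illustrative):

```python
MONOPOLY_THRESHOLD = 5
total_active = 0     # "totalActiveValidators" — the only counter consulted
total_proposed = 0   # proposed-but-unapproved validators are invisible here

def propose_stake(n):
    global total_proposed
    assert total_active + n <= MONOPOLY_THRESHOLD, \
        "StakeUtils: IceBear does NOT like monopolies"
    total_proposed += n

propose_stake(4)
propose_stake(4)   # passes again: the first 4 are only "proposed"
# once the maintainer approves (no monopoly check there), 8 > 5 become active
assert total_proposed == 8
```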
Include the `(DATASTORE.readUintForId(poolId,DataStoreUtils.getKey(operatorId, "proposedValidators"))` into the require statement, just like in the check for the node operator allowance check.\\n```\\nrequire(\\n (DATASTORE.readUintForId(\\n poolId,\\n DataStoreUtils.getKey(operatorId, "proposedValidators")\\n ) +\\n DATASTORE.readUintForId(\\n poolId,\\n DataStoreUtils.getKey(operatorId, "activeValidators")\\n ) +\\n pubkeys.length) <=\\n operatorAllowance(DATASTORE, poolId, operatorId),\\n "StakeUtils: NOT enough allowance"\\n);\\n```\\n
null
```\\nrequire(\\n (DATASTORE.readUintForId(operatorId, "totalActiveValidators") +\\n pubkeys.length) <= self.TELESCOPE.MONOPOLY\\_THRESHOLD,\\n "StakeUtils: IceBear does NOT like monopolies"\\n);\\n```\\n
Comparison operator used instead of assignment operator
medium
```\\nself.\\_validators[\\_pk].state == 2;\\n```\\n\\n```\\nself.\\_validators[\\_pk].state == 3;\\n```\\n
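The same bug class reproduces in Python, where a bare comparison is an expression whose result is silently discarded:

```python
validator = {"state": 1}

validator["state"] == 2         # comparison: evaluates to False, result discarded
assert validator["state"] == 1  # the state was never changed

validator["state"] = 2          # assignment: the intended effect
assert validator["state"] == 2
```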
Replace `==` with `=`.
null
```\\nself.\\_validators[\\_pk].state == 2;\\n```\\n
initiator modifier will not work in the context of one transaction
low
Each planet, comet or operator must be initialized after the onboarding proposal is approved. In order to make sure that these entities are not initialized more than once, `initiateOperator`, `initiateComet` and `initiatePlanet` have the `initiator` modifier.\\n```\\nfunction initiatePlanet(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256[3] memory uintSpecs,\\n address[5] memory addressSpecs,\\n string[2] calldata interfaceSpecs\\n)\\n external\\n initiator(DATASTORE, 5, uintSpecs[0], addressSpecs[1])\\n returns (\\n address miniGovernance,\\n address gInterface,\\n address withdrawalPool\\n )\\n```\\n\\n```\\nfunction initiateComet(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 id,\\n uint256 fee,\\n address maintainer\\n) external initiator(DATASTORE, 6, id, maintainer) {\\n```\\n\\n```\\nfunction initiateOperator(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 id,\\n uint256 fee,\\n address maintainer\\n) external initiator(DATASTORE, 4, id, maintainer) {\\n```\\n\\nInside that modifier, we check that the `initiated` flag is 0 and if so we proceed to initialization. We later update it to the current timestamp.\\n```\\nmodifier initiator(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256 \\_TYPE,\\n uint256 \\_id,\\n address \\_maintainer\\n) {\\n require(\\n msg.sender == DATASTORE.readAddressForId(\\_id, "CONTROLLER"),\\n "MaintainerUtils: sender NOT CONTROLLER"\\n );\\n require(\\n DATASTORE.readUintForId(\\_id, "TYPE") == \\_TYPE,\\n "MaintainerUtils: id NOT correct TYPE"\\n );\\n require(\\n DATASTORE.readUintForId(\\_id, "initiated") == 0,\\n "MaintainerUtils: already initiated"\\n );\\n\\n DATASTORE.writeAddressForId(\\_id, "maintainer", \\_maintainer);\\n\\n \\_;\\n\\n DATASTORE.writeUintForId(\\_id, "initiated", block.timestamp);\\n\\n emit IdInitiated(\\_id, \\_TYPE);\\n}\\n```\\n\\nUnfortunately, this does not follow the checks-effects-interactions pattern. 
If, for example, `initiatePlanet` were called again from the body of the modifier, this check would still pass, making it susceptible to a reentrancy attack. While we could not find a way to exploit this in the current engagement, given that the system is designed to be upgradable, this could become a risk in the future. For example, if during the initialization of the planet the maintainer were allowed to pass a custom interface, that interface could potentially allow reentering.
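The pattern violation and the suggested fix can be modeled in Python (the `body` callback stands in for the modifier's `_;` placeholder, i.e. the function body that may perform external interactions):

```python
initiated = {}
calls = []

def initiator_unsafe(_id, body):
    assert initiated.get(_id, 0) == 0, "already initiated"
    body()                  # the function body (interactions) runs first...
    initiated[_id] = 1      # ...the flag is written only afterwards

def reenter():
    calls.append("init")
    if len(calls) == 1:     # a malicious interface re-enters once
        initiator_unsafe(42, reenter)

initiator_unsafe(42, reenter)
assert len(calls) == 2      # the initialization body executed twice

# writing the flag before the body (checks-effects-interactions) closes the hole
initiated.clear(); calls.clear()

def initiator_safe(_id, body):
    assert initiated.get(_id, 0) == 0, "already initiated"
    initiated[_id] = 1      # effect recorded before any interaction
    body()

def reenter_safe():
    calls.append("init")
    if len(calls) == 1:
        try:
            initiator_safe(42, reenter_safe)
        except AssertionError:
            calls.append("blocked")

initiator_safe(42, reenter_safe)
assert calls == ["init", "blocked"]
```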
Bring the line that updated the `initiated` flag to the current timestamp before the `_;`.\\n```\\nDATASTORE.writeUintForId(\\_id, "initiated", block.timestamp);\\n```\\n
null
```\\nfunction initiatePlanet(\\n DataStoreUtils.DataStore storage DATASTORE,\\n uint256[3] memory uintSpecs,\\n address[5] memory addressSpecs,\\n string[2] calldata interfaceSpecs\\n)\\n external\\n initiator(DATASTORE, 5, uintSpecs[0], addressSpecs[1])\\n returns (\\n address miniGovernance,\\n address gInterface,\\n address withdrawalPool\\n )\\n```\\n
Incorrect accounting for the burned gEth
low
Geode Portal records the amount of minted and burned gETH on any given day during the active period of the oracle. One case where some gETH is burned is when users redeem gETH for ETH. In the burn function we burn `spentGeth - gEthDonation`, but in the accounting code we do not account for `gEthDonation`, so the code records more assets burned than were really burned.\\n```\\nDATASTORE.subUintForId(poolId, "surplus", spentSurplus);\\nself.gETH.burn(address(this), poolId, spentGeth - gEthDonation);\\n\\nif (self.TELESCOPE.\\_isOracleActive()) {\\n bytes32 dailyBufferKey = DataStoreUtils.getKey(\\n block.timestamp - (block.timestamp % OracleUtils.ORACLE\\_PERIOD),\\n "burnBuffer"\\n );\\n DATASTORE.addUintForId(poolId, dailyBufferKey, spentGeth);\\n}\\n```\\n
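In numbers (a Python model; the amounts are illustrative):

```python
total_supply = 1_000
burn_buffer = 0   # the oracle's daily "burnBuffer"

def withdraw(spent_geth, g_eth_donation):
    global total_supply, burn_buffer
    total_supply -= spent_geth - g_eth_donation  # what is actually burned
    burn_buffer += spent_geth                    # what gets recorded (the bug)

withdraw(spent_geth=100, g_eth_donation=10)
assert total_supply == 910   # 90 gETH really left circulation
assert burn_buffer == 100    # but the oracle will account for 100
```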
Record the `spentGeth` - gEthDonation instead of just `spentGeth` in the burn buffer.\\n```\\nDATASTORE.addUintForId(poolId, dailyBufferKey, spentGeth);\\n```\\n
null
```\\nDATASTORE.subUintForId(poolId, "surplus", spentSurplus);\\nself.gETH.burn(address(this), poolId, spentGeth - gEthDonation);\\n\\nif (self.TELESCOPE.\\_isOracleActive()) {\\n bytes32 dailyBufferKey = DataStoreUtils.getKey(\\n block.timestamp - (block.timestamp % OracleUtils.ORACLE\\_PERIOD),\\n "burnBuffer"\\n );\\n DATASTORE.addUintForId(poolId, dailyBufferKey, spentGeth);\\n}\\n```\\n
Boost calculation on fetchUnstake should not be using the cumBalance when it is larger than debt.
low
The Geode team implemented the 2-step withdrawal mechanism for the staked ETH. First, node operators signal their intent to withdraw the stake, and then the oracle will trigger all of the accounting of rewards, balances, and buybacks if necessary. Buybacks are what we are interested in at this time. Buybacks are performed by checking if the derivative asset is off peg in the Dynamic Withdrawal Pool contract. Once the debt is larger than some ignorable threshold, an arbitrage buyback will be executed. A portion of the arbitrage profit will go to the node operator. The issue here is that when simulating the arbitrage swap in the `calculateSwap` call, we use the cumulative un-stake balance rather than the ETH debt present in the DWP. In the case where the cumulative withdrawal balance is higher than the debt, the node operator will receive a higher reward than intended.\\n```\\nuint256 arb = withdrawalPoolById(DATASTORE, poolId)\\n .calculateSwap(0, 1, cumBal);\\n```\\n
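A stylized model of the overestimation (Python; a toy constant-product curve stands in for the DWP's actual StableSwap math, and all figures are invented):

```python
def calculate_swap(x_reserve, y_reserve, dx):
    # toy constant-product pricing in place of the DWP's StableSwap formula
    return y_reserve - (x_reserve * y_reserve) // (x_reserve + dx)

ETH, GETH = 1_000, 1_100   # pool off peg: gETH trades below 1 ETH
debt = 40                  # ETH debt actually present in the pool
cum_bal = 400              # cumulative unstake balance, 10x the debt

# the boost is derived from the simulated swap output, so feeding cumBal
# instead of the (smaller) debt inflates the operator's reward
assert calculate_swap(ETH, GETH, cum_bal) > calculate_swap(ETH, GETH, debt)
```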
Use the `debt` amount of ETH in the boost reward calculation when the cumulative balance is larger than the `debt`.
null
```\\nuint256 arb = withdrawalPoolById(DATASTORE, poolId)\\n .calculateSwap(0, 1, cumBal);\\n```\\n
DataStore struct not having the _gap for upgrades.
low
```\\nDataStoreUtils.DataStore private DATASTORE;\\nGeodeUtils.Universe private GEODE;\\nStakeUtils.StakePool private STAKEPOOL;\\n```\\n\\nIt is worth mentioning that Geode contracts are meant to support the upgradability pattern. Given that information, one should be careful not to overwrite the storage variables by reordering the old ones or adding new ones anywhere but at the end of the list of variables when upgrading. The issue comes with the fact that structs seem to give a false sense of security, making it feel like they are an isolated set of storage variables that will not override anything else. In reality, structs are just tuples that are expanded in storage sequentially, just like all the other storage variables. For that reason, if you have two struct storage variables listed back to back like in the code above, you either need to make sure not to change the order or the number of variables in the structs other than the last one between upgrades, or you need to add a `uint256[N] _gap` array of fixed size to reserve some storage slots for the future at the end of each struct. The Geode Finance team is missing the gap in the `DataStore` struct, making it non-upgradable.\\n```\\nstruct DataStore {\\n mapping(uint256 => uint256[]) allIdsByType;\\n mapping(bytes32 => uint256) uintData;\\n mapping(bytes32 => bytes) bytesData;\\n mapping(bytes32 => address) addressData;\\n}\\n```\\n
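The sequential expansion can be illustrated with a simplified slot model (Python; Solidity's real layout rules for mappings differ in detail, but each mapping field does occupy its own slot in sequence):

```python
def layout(structs):
    # structs expand sequentially: each field takes the next storage slot
    slots, i = {}, 0
    for name, fields in structs:
        for f in fields:
            slots[f"{name}.{f}"] = i
            i += 1
    return slots

v1 = layout([("DataStore", ["allIdsByType", "uintData"]),
             ("Universe", ["SENATE"])])
# an upgrade appends a field to DataStore, which has no reserved gap:
v2 = layout([("DataStore", ["allIdsByType", "uintData", "newField"]),
             ("Universe", ["SENATE"])])
assert v1["Universe.SENATE"] == 2
assert v2["Universe.SENATE"] == 3   # everything after DataStore shifted by one slot
```

After the shift, `Universe.SENATE` would read whatever value the new `DataStore` field left in slot 2, which is exactly the corruption a `_gap` prevents.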
We suggest that a gap is used in `DataStore` as well. Since it was used for all the other structs, we consider its absence just an oversight.
null
```\\nDataStoreUtils.DataStore private DATASTORE;\\nGeodeUtils.Universe private GEODE;\\nStakeUtils.StakePool private STAKEPOOL;\\n```\\n
Handle division by 0
medium
There are a few places in the code where division by zero may occur but isn't handled.\\nIf the vault settles at exactly 0 value with 0 remaining strategy token value, there may be an unhandled division by zero trying to divide claims on the settled assets:\\n```\\nint256 settledVaultValue = settlementRate.convertToUnderlying(residualAssetCashBalance)\\n .add(totalStrategyTokenValueAtSettlement);\\n\\n// If the vault is insolvent (meaning residualAssetCashBalance < 0), it is necessarily\\n// true that totalStrategyTokens == 0 (meaning all tokens were sold in an attempt to\\n// repay the debt). That means settledVaultValue == residualAssetCashBalance, strategyTokenClaim == 0\\n// and assetCashClaim == totalAccountValue. Accounts that are still solvent will be paid from the\\n// reserve, accounts that are insolvent will have a totalAccountValue == 0.\\nstrategyTokenClaim = totalAccountValue.mul(vaultState.totalStrategyTokens.toInt())\\n .div(settledVaultValue).toUint();\\n\\nassetCashClaim = totalAccountValue.mul(residualAssetCashBalance)\\n .div(settledVaultValue);\\n```\\n\\nIf a vault account is entirely insolvent and its `vaultShareValue` is zero, there will be an unhandled division by zero during liquidation:\\n```\\nuint256 vaultSharesToLiquidator;\\n{\\n vaultSharesToLiquidator = vaultAccount.tempCashBalance.toUint()\\n .mul(vaultConfig.liquidationRate.toUint())\\n .mul(vaultAccount.vaultShares)\\n .div(vaultShareValue.toUint())\\n .div(uint256(Constants.RATE\\_PRECISION));\\n}\\n```\\n\\nIf a vault account's secondary debt is being repaid when there is none, there will be an unhandled division by zero:\\n```\\nVaultSecondaryBorrowStorage storage balance =\\n LibStorage.getVaultSecondaryBorrow()[vaultConfig.vault][maturity][currencyId];\\nuint256 totalfCashBorrowed = balance.totalfCashBorrowed;\\nuint256 totalAccountDebtShares = balance.totalAccountDebtShares;\\n\\nfCashToLend = debtSharesToRepay.mul(totalfCashBorrowed).div(totalAccountDebtShares).toInt();\\n```\\n\\nWhile these cases may be unlikely today, this code could be reutilized in other circumstances later that could cause reverts and even disrupt operations more frequently.
Handle the cases where the denominator could be zero appropriately.
null
```\\nint256 settledVaultValue = settlementRate.convertToUnderlying(residualAssetCashBalance)\\n .add(totalStrategyTokenValueAtSettlement);\\n\\n// If the vault is insolvent (meaning residualAssetCashBalance < 0), it is necessarily\\n// true that totalStrategyTokens == 0 (meaning all tokens were sold in an attempt to\\n// repay the debt). That means settledVaultValue == residualAssetCashBalance, strategyTokenClaim == 0\\n// and assetCashClaim == totalAccountValue. Accounts that are still solvent will be paid from the\\n// reserve, accounts that are insolvent will have a totalAccountValue == 0.\\nstrategyTokenClaim = totalAccountValue.mul(vaultState.totalStrategyTokens.toInt())\\n .div(settledVaultValue).toUint();\\n\\nassetCashClaim = totalAccountValue.mul(residualAssetCashBalance)\\n .div(settledVaultValue);\\n```\\n
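A minimal sketch of the recommended guard for the settlement split above (variable names follow the quoted snippet; the zero-value fallback branch is an assumption about the desired behavior, not code from the audited system):

```solidity
// Sketch: guard the claim split against a zero denominator. If the vault
// settled at exactly zero value there is nothing to split, so both claims
// can be zero instead of reverting on division by zero.
if (settledVaultValue == 0) {
    strategyTokenClaim = 0;
    assetCashClaim = 0;
} else {
    strategyTokenClaim = totalAccountValue.mul(vaultState.totalStrategyTokens.toInt())
        .div(settledVaultValue).toUint();
    assetCashClaim = totalAccountValue.mul(residualAssetCashBalance)
        .div(settledVaultValue);
}
```

Analogous zero checks on `vaultShareValue` and `totalAccountDebtShares` would cover the liquidation and secondary-debt cases.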
Increasing a leveraged position in a vault with secondary borrow currency will revert
low
From the client's specifications for the strategy vaults, we know that accounts should be able to increase their leveraged positions before maturity. This property will not hold for the vaults that require borrowing a secondary currency to enter a position. When an account opens its position in such a vault for the first time, the `VaultAccountSecondaryDebtShareStorage.maturity` is set to the maturity the account has entered. When the account tries to increase the debt position, the account's current maturity will be checked, and since it is neither set to 0, as in the case where an account enters the vault for the first time, nor smaller than the new maturity passed by the account, as in the case of a rollover, the code will revert.\\n```\\nif (accountMaturity != 0) {\\n // Cannot roll to a shorter term maturity\\n require(accountMaturity < maturity);\\n```\\n
In order to fix this issue, we recommend that `<` is replaced with `<=` so that an account can enter the vault maturity it is already in, as well as future ones.
null
```\\nif (accountMaturity != 0) {\\n // Cannot roll to a shorter term maturity\\n require(accountMaturity < maturity);\\n```\\n
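A sketch of the suggested one-character fix, with the context lines reproduced from the quoted snippet:

```solidity
if (accountMaturity != 0) {
    // Cannot roll to a shorter term maturity; re-entering the current
    // maturity is allowed so accounts can increase an existing position
    require(accountMaturity <= maturity);
}
```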
Secondary Currency debt is not managed by the Notional Controller
low
Some of the Notional Strategy Vaults may allow for secondary currencies to be borrowed as part of the same strategy. For example, a strategy may allow for USDC to be its primary borrow currency as well as have ETH as its secondary borrow currency.\\nIn order to enter the vault, a user would have to deposit `depositAmountExternal` of the primary borrow currency when calling `VaultAccountAction.enterVault()`. This would allow the user to borrow with leverage, as long as the `vaultConfig.checkCollateralRatio()` check on that account succeeds, which is based on the initial deposit and borrow currency amounts. This collateral ratio check is then performed throughout that user account's lifecycle in that vault, such as when they try to roll their maturity, or when liquidators try to perform collateral checks to ensure there is no bad debt.\\nHowever, in the event that the vault has a secondary borrow currency as well, that additional secondary debt is not calculated as part of the `checkCollateralRatio()` check. The only debt that is being considered is the `vaultAccount.fCash` that corresponds to the primary borrow currency debt:\\n```\\nfunction checkCollateralRatio(\\n VaultConfig memory vaultConfig,\\n VaultState memory vaultState,\\n VaultAccount memory vaultAccount\\n) internal view {\\n (int256 collateralRatio, /\\* \\*/) = calculateCollateralRatio(\\n vaultConfig, vaultState, vaultAccount.account, vaultAccount.vaultShares, vaultAccount.fCash\\n```\\n\\n```\\nfunction calculateCollateralRatio(\\n VaultConfig memory vaultConfig,\\n VaultState memory vaultState,\\n address account,\\n uint256 vaultShares,\\n int256 fCash\\n) internal view returns (int256 collateralRatio, int256 vaultShareValue) {\\n vaultShareValue = vaultState.getCashValueOfShare(vaultConfig, account, vaultShares);\\n\\n // We do not discount fCash to present value so that we do not introduce interest\\n // rate risk in this calculation. The economic benefit of discounting will be very\\n // minor relative to the added complexity of accounting for interest rate risk.\\n\\n // Convert fCash to a positive amount of asset cash\\n int256 debtOutstanding = vaultConfig.assetRate.convertFromUnderlying(fCash.neg());\\n```\\n\\nWhereas the value of strategy tokens that belong to that user account are being calculated by calling `IStrategyVault(vault).convertStrategyToUnderlying()` on the associated strategy vault:\\n```\\nfunction getCashValueOfShare(\\n VaultState memory vaultState,\\n VaultConfig memory vaultConfig,\\n address account,\\n uint256 vaultShares\\n) internal view returns (int256 assetCashValue) {\\n if (vaultShares == 0) return 0;\\n (uint256 assetCash, uint256 strategyTokens) = getPoolShare(vaultState, vaultShares);\\n int256 underlyingInternalStrategyTokenValue = \\_getStrategyTokenValueUnderlyingInternal(\\n vaultConfig.borrowCurrencyId, vaultConfig.vault, account, strategyTokens, vaultState.maturity\\n );\\n```\\n\\n```\\nfunction \\_getStrategyTokenValueUnderlyingInternal(\\n uint16 currencyId,\\n address vault,\\n address account,\\n uint256 strategyTokens,\\n uint256 maturity\\n) private view returns (int256) {\\n Token memory token = TokenHandler.getUnderlyingToken(currencyId);\\n // This will be true if the the token is "NonMintable" meaning that it does not have\\n // an underlying token, only an asset token\\n if (token.decimals == 0) token = TokenHandler.getAssetToken(currencyId);\\n\\n return token.convertToInternal(\\n IStrategyVault(vault).convertStrategyToUnderlying(account, strategyTokens, maturity)\\n );\\n}\\n```\\n\\nFrom conversations with the Notional team, it is assumed that this call returns the strategy token value subtracted against the secondary currencies debt, as is the case in the `Balancer2TokenVault` for example. In other words, when collateral ratio checks are performed, those strategy vaults that utilize secondary currency borrows would need to calculate the value of strategy tokens already accounting for any secondary debt. However, this is a dependency for a critical piece of the Notional controller's strategy vaults collateral checks.\\nTherefore, even though the strategy vaults' code and logic would be vetted before their whitelisting into the Notional system, they would still remain an external dependency with relatively arbitrary code responsible for the liquidation infrastructure that could lead to bad debt or incorrect liquidations if the vaults give inaccurate information, and thus potential loss of funds.
Specific strategy vault implementations using secondary borrows were not in scope of this audit. However, since the core Notional Vault system was, and it includes secondary borrow currency functionality, from the point of view of the larger Notional system it is recommended to include secondary debt checks within the Notional controller contract to reduce external dependency on the strategy vaults' logic.
null
```\\nfunction checkCollateralRatio(\\n VaultConfig memory vaultConfig,\\n VaultState memory vaultState,\\n VaultAccount memory vaultAccount\\n) internal view {\\n (int256 collateralRatio, /\\* \\*/) = calculateCollateralRatio(\\n vaultConfig, vaultState, vaultAccount.account, vaultAccount.vaultShares, vaultAccount.fCash\\n```\\n
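One possible shape for a controller-side check, assuming a hypothetical helper `getSecondaryDebtInPrimaryUnderlying()` that converts an account's secondary currency debt into the primary borrow currency (this helper does not exist in the audited code):

```solidity
// Sketch: fold secondary borrow debt into the collateral ratio calculation
// so the controller does not rely solely on the strategy vault's valuation.
int256 secondaryDebt = getSecondaryDebtInPrimaryUnderlying(
    vaultConfig, account, vaultState.maturity
);
// Convert the combined fCash and secondary debt to a positive amount of asset cash
int256 debtOutstanding = vaultConfig.assetRate.convertFromUnderlying(
    fCash.neg().add(secondaryDebt)
);
```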
Vaults are unable to borrow single secondary currency
low
As was previously mentioned, some strategies require borrowing one or two secondary currencies. All secondary currencies have to be whitelisted in `VaultConfig.secondaryBorrowCurrencies`. The borrow operation for secondary currencies is performed in the `borrowSecondaryCurrencyToVault(...)` function. Due to a `require` statement in that function, vaults will only be able to borrow secondary currencies if both of the currencies are whitelisted in `VaultConfig.secondaryBorrowCurrencies`. Considering that many strategies will have just one secondary currency, this will prevent those strategies from borrowing any secondary assets.\\n```\\nrequire(currencies[0] != 0 && currencies[1] != 0);\\n```\\n
We suggest that the `&&` operator is replaced by the `||` operator. Ideally, an additional check will be performed that will ensure that values in argument arrays `fCashToBorrow`, `maxBorrowRate`, and `minRollLendRate` are passed under the same index as the whitelisted currencies in `VaultConfig.secondaryBorrowCurrencies`.\\n```\\nfunction borrowSecondaryCurrencyToVault(\\n address account,\\n uint256 maturity,\\n uint256[2] calldata fCashToBorrow,\\n uint32[2] calldata maxBorrowRate,\\n uint32[2] calldata minRollLendRate\\n) external override returns (uint256[2] memory underlyingTokensTransferred) {\\n```\\n
null
```\\nrequire(currencies[0] != 0 && currencies[1] != 0);\\n```\\n
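A sketch of the recommended change, including the suggested per-index consistency check (the loop is an illustrative assumption; parameter names follow the quoted `borrowSecondaryCurrencyToVault` signature):

```solidity
// Accept one OR two configured secondary currencies
require(currencies[0] != 0 || currencies[1] != 0);

for (uint256 i; i < 2; i++) {
    // Borrow parameters may only be non-zero for a whitelisted currency slot
    if (currencies[i] == 0) {
        require(
            fCashToBorrow[i] == 0 && maxBorrowRate[i] == 0 && minRollLendRate[i] == 0
        );
    }
}
```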
An account roll may be impossible if the vault is already at the maximum borrow capacity.
low
One of the actions allowed in Notional Strategy Vaults is to roll an account's maturity to a later one by borrowing from a later maturity and repaying that into the debt of the earlier maturity.\\nHowever, this could cause an issue if the vault is at maximum capacity at the time of the roll. When an account performs this type of roll, the new borrow would have to be more than the existing debt simply because it has to at least cover the existing debt and pay for the borrow fees that get added on every new borrow. Since the whole vault was already at max borrow capacity before with the old, smaller borrow, this process would revert at the end after the new borrow as well once the process gets to `VaultAccount.updateAccountfCash` and `VaultConfiguration.updateUsedBorrowCapacity`:\\n```\\nfunction updateUsedBorrowCapacity(\\n address vault,\\n uint16 currencyId,\\n int256 netfCash\\n) internal returns (int256 totalUsedBorrowCapacity) {\\n VaultBorrowCapacityStorage storage cap = LibStorage.getVaultBorrowCapacity()[vault][currencyId];\\n\\n // Update the total used borrow capacity, when borrowing this number will increase (netfCash < 0),\\n // when lending this number will decrease (netfCash > 0).\\n totalUsedBorrowCapacity = int256(uint256(cap.totalUsedBorrowCapacity)).sub(netfCash);\\n if (netfCash < 0) {\\n // Always allow lending to reduce the total used borrow capacity to satisfy the case when the max borrow\\n // capacity has been reduced by governance below the totalUsedBorrowCapacity. When borrowing, it cannot\\n // go past the limit.\\n require(totalUsedBorrowCapacity <= int256(uint256(cap.maxBorrowCapacity)), "Max Capacity");\\n```\\n\\nThe result is that users won't be able to roll while the vault is at max capacity. However, users may exit some part of their position to reduce their borrow, thereby reducing the overall vault borrow capacity, and then could execute the roll. A bigger problem would occur if the vault configuration got updated to massively reduce the borrow capacity, which would force users to exit their position more significantly with likely a much smaller chance at being able to roll.
Document this case so that users can realise that rolling may not always be an option. Perhaps consider adding ways where users can pay a small deposit, like on `enterVault`, to offset the additional difference in borrows and pay for fees so they can remain with essentially the same size position within Notional.
null
```\\nfunction updateUsedBorrowCapacity(\\n address vault,\\n uint16 currencyId,\\n int256 netfCash\\n) internal returns (int256 totalUsedBorrowCapacity) {\\n VaultBorrowCapacityStorage storage cap = LibStorage.getVaultBorrowCapacity()[vault][currencyId];\\n\\n // Update the total used borrow capacity, when borrowing this number will increase (netfCash < 0),\\n // when lending this number will decrease (netfCash > 0).\\n totalUsedBorrowCapacity = int256(uint256(cap.totalUsedBorrowCapacity)).sub(netfCash);\\n if (netfCash < 0) {\\n // Always allow lending to reduce the total used borrow capacity to satisfy the case when the max borrow\\n // capacity has been reduced by governance below the totalUsedBorrowCapacity. When borrowing, it cannot\\n // go past the limit.\\n require(totalUsedBorrowCapacity <= int256(uint256(cap.maxBorrowCapacity)), "Max Capacity");\\n```\\n
Rollover might introduce economically impractical deposits of dust into a strategy
low
During the rollover of the strategy position into a longer maturity, several things happen:\\nFunds are borrowed from the longer maturity to pay off the debt and fees of the current maturity.\\nStrategy tokens that are associated with the current maturity are moved to the new maturity.\\nAny additional funds provided by the account are deposited into the strategy into a new longer maturity.\\nIn reality, due to the AMM nature of the protocol, the funds borrowed from the new maturity could exceed the debt the account has in the current maturity, resulting in a non-zero `vaultAccount.tempCashBalance`. In that case, those funds will be deposited into the strategy. That would happen even if there are no external funds supplied by the account for the deposit.\\nIt is possible that the dust in the temporary account balance will not cover the gas cost of triggering a full deposit call of the strategy.\\n```\\nuint256 strategyTokensMinted = vaultConfig.deposit(\\n vaultAccount.account, vaultAccount.tempCashBalance, vaultState.maturity, additionalUnderlyingExternal, vaultData\\n);\\n```\\n
We suggest introducing additional checks ensuring that on rollover `vaultAccount.tempCashBalance + additionalUnderlyingExternal > 0`, or that the sum is larger than a certain threshold, such as `minAccountBorrowSize` for example.
null
```\\nuint256 strategyTokensMinted = vaultConfig.deposit(\\n vaultAccount.account, vaultAccount.tempCashBalance, vaultState.maturity, additionalUnderlyingExternal, vaultData\\n);\\n```\\n
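A sketch of the suggested dust check before the deposit call. Unit conversions between asset cash and external underlying are omitted for brevity, and accessing `minAccountBorrowSize` via `vaultConfig` is an assumption:

```solidity
// Revert rollover deposits too small to be economically practical
require(
    vaultAccount.tempCashBalance.toUint() + additionalUnderlyingExternal >
        vaultConfig.minAccountBorrowSize.toUint(),
    "Dust deposit"
);
```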
Strategy vault swaps can be frontrun
low
Some strategy vaults utilize borrowing one currency, swapping it for another, and then using the new currency somewhere to generate yield. For example, the CrossCurrencyfCash strategy vault could borrow USDC, swap it for DAI, and then deposit that DAI back into Notional if the DAI lending interest rates are greater than USDC borrowing interest rates. However, during vault settlement the assets would need to be swapped back into the original borrow currency.\\nSince these vaults control the borrowed assets that go only into white-listed strategies, the Notional system allows users to borrow multiples of their posted collateral and claim the yield from a much larger position. As a result, these strategy vaults would likely have significant funds being borrowed and managed into these strategies.\\nHowever, as mentioned above, these strategies usually utilize a trading mechanism to swap borrowed currencies into whatever is required by the strategy, and these trades may be quite large. In fact, the `BaseStrategyVault` implementation contains functions that interact with Notional's trading module to assist with those swaps:\\n```\\n/// @notice Can be used to delegate call to the TradingModule's implementation in order to execute\\n/// a trade.\\nfunction \\_executeTrade(\\n uint16 dexId,\\n Trade memory trade\\n) internal returns (uint256 amountSold, uint256 amountBought) {\\n (bool success, bytes memory result) = nProxy(payable(address(TRADING\\_MODULE))).getImplementation()\\n .delegatecall(abi.encodeWithSelector(ITradingModule.executeTrade.selector, dexId, trade));\\n require(success);\\n (amountSold, amountBought) = abi.decode(result, (uint256, uint256));\\n}\\n\\n/// @notice Can be used to delegate call to the TradingModule's implementation in order to execute\\n/// a trade.\\nfunction \\_executeTradeWithDynamicSlippage(\\n uint16 dexId,\\n Trade memory trade,\\n uint32 dynamicSlippageLimit\\n) internal returns (uint256 amountSold, uint256 amountBought) {\\n (bool success, bytes memory result) = nProxy(payable(address(TRADING\\_MODULE))).getImplementation()\\n .delegatecall(abi.encodeWithSelector(\\n ITradingModule.executeTradeWithDynamicSlippage.selector,\\n dexId, trade, dynamicSlippageLimit\\n )\\n );\\n require(success);\\n (amountSold, amountBought) = abi.decode(result, (uint256, uint256));\\n}\\n```\\n\\nAlthough some strategies may manage stablecoin <-> stablecoin swaps that typically would incur low slippage, large size trades could still suffer from low on-chain liquidity and end up getting frontrun and "sandwiched" by MEV bots or other actors, thereby extracting the maximum amount from the strategy vault swaps as slippage permits. This could be especially significant during vaults' settlements, which can be initiated by anyone, as lending currencies may be swapped in large batches rather than on a per-account basis. For example with the CrossCurrencyfCash vault, it can only enter settlement if all strategy tokens (lending currency in this case) are gone and swapped back into the borrow currency:\\n```\\nif (vaultState.totalStrategyTokens == 0) {\\n NOTIONAL.settleVault(address(this), maturity);\\n}\\n```\\n\\nAs a result, in addition to the risk of stablecoins' getting off-peg, unfavorable market liquidity conditions and arbitrage-seeking actors could eat into the profits generated by this strategy as per the maximum allowed slippage. However, during settlement the strategy vaults don't have the luxury of waiting for the right conditions to perform the trade as the borrows need to be repaid at their maturities.\\nSo, the profitability of the vaults, and therefore users, could suffer due to potential low market liquidity allowing high slippage and risks of being frontrun with the chosen strategy vaults' currencies.
Ensure that the currencies chosen to generate yield in the strategy vaults have sufficient market liquidity on exchanges allowing for low slippage swaps.
null
```\\n/// @notice Can be used to delegate call to the TradingModule's implementation in order to execute\\n/// a trade.\\nfunction \\_executeTrade(\\n uint16 dexId,\\n Trade memory trade\\n) internal returns (uint256 amountSold, uint256 amountBought) {\\n (bool success, bytes memory result) = nProxy(payable(address(TRADING\\_MODULE))).getImplementation()\\n .delegatecall(abi.encodeWithSelector(ITradingModule.executeTrade.selector, dexId, trade));\\n require(success);\\n (amountSold, amountBought) = abi.decode(result, (uint256, uint256));\\n}\\n\\n/// @notice Can be used to delegate call to the TradingModule's implementation in order to execute\\n/// a trade.\\nfunction \\_executeTradeWithDynamicSlippage(\\n uint16 dexId,\\n Trade memory trade,\\n uint32 dynamicSlippageLimit\\n) internal returns (uint256 amountSold, uint256 amountBought) {\\n (bool success, bytes memory result) = nProxy(payable(address(TRADING\\_MODULE))).getImplementation()\\n .delegatecall(abi.encodeWithSelector(\\n ITradingModule.executeTradeWithDynamicSlippage.selector,\\n dexId, trade, dynamicSlippageLimit\\n )\\n );\\n require(success);\\n (amountSold, amountBought) = abi.decode(result, (uint256, uint256));\\n}\\n```\\n
ConvexPositionHandler._claimRewards incorrectly calculates amount of LP tokens to unstake
high
`ConvexPositionHandler._claimRewards` is an internal function that harvests Convex reward tokens and takes the generated yield in ETH out of the Curve pool by calculating the difference in LP token price. To do so, it receives the current share price of the curve LP tokens and compares it to the last one stored in the contract during the last rewards claim. The difference in share price is then multiplied by the LP token balance to get the ETH yield via the `yieldEarned` variable:\\n```\\nuint256 currentSharePrice = ethStEthPool.get\\_virtual\\_price();\\nif (currentSharePrice > prevSharePrice) {\\n // claim any gain on lp token yields\\n uint256 contractLpTokenBalance = lpToken.balanceOf(address(this));\\n uint256 totalLpBalance = contractLpTokenBalance +\\n baseRewardPool.balanceOf(address(this));\\n uint256 yieldEarned = (currentSharePrice - prevSharePrice) \\*\\n totalLpBalance;\\n```\\n\\nHowever, to receive this ETH yield, LP tokens need to be unstaked from the Convex pool and then converted via the Curve pool. To do this, the contract introduces lpTokenEarned:\\n```\\nuint256 lpTokenEarned = yieldEarned / NORMALIZATION\\_FACTOR; // 18 decimal from virtual price\\n```\\n\\nThis calculation is incorrect. It uses yieldEarned which is denominated in ETH and simply divides it by the normalization factor to get the correct number of decimals, which still returns back an amount denominated in ETH, whereas an amount denominated in LP tokens should be returned instead.\\nThis could lead to significant accounting issues including losses in the “no-loss” parts of the vault's strategy as 1 LP token is almost always guaranteed to be worth more than 1 ETH. So, when the intention is to withdraw `X` ETH worth of an LP token, withdrawing `X` LP tokens will actually withdraw `Y` ETH worth of an LP token, where `Y>X`. As a result, less than expected ETH will remain in the Convex handler part of the vault, and the ETH yield will go to the Lyra options, which are much riskier. 
In the event Lyra options don't work out and there is more ETH withdrawn than expected, there is a possibility that this would result in a loss for the vault.
The fix is straightforward: calculate `lpTokenEarned` using the `currentSharePrice` already received from the Curve pool. That way, it is the amount of LP tokens that will be sent to be unwrapped and unstaked from the Convex and Curve pools. This will also take care of the normalization factor: `uint256 lpTokenEarned = yieldEarned / currentSharePrice;`
null
```\\nuint256 currentSharePrice = ethStEthPool.get\\_virtual\\_price();\\nif (currentSharePrice > prevSharePrice) {\\n // claim any gain on lp token yields\\n uint256 contractLpTokenBalance = lpToken.balanceOf(address(this));\\n uint256 totalLpBalance = contractLpTokenBalance +\\n baseRewardPool.balanceOf(address(this));\\n uint256 yieldEarned = (currentSharePrice - prevSharePrice) \\*\\n totalLpBalance;\\n```\\n
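The corrected conversion from the recommendation, annotated with the units involved (following the quoted snippet's 18-decimal virtual price):

```solidity
// yieldEarned = (currentSharePrice - prevSharePrice) * totalLpBalance
// units: (ETH per LP, 1e18-scaled) * LP tokens -> ETH value scaled by 1e18.
// Dividing by the 1e18-scaled ETH-per-LP share price therefore yields an
// amount of LP tokens (and absorbs the normalization factor), not ETH:
uint256 lpTokenEarned = yieldEarned / currentSharePrice;
```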
The WETH tokens are not taken into account in the ConvexTradeExecutor.totalFunds function
high
The `totalFunds` function of every executor should include all the funds that belong to the contract:\\n```\\nfunction totalFunds() public view override returns (uint256, uint256) {\\n return ConvexPositionHandler.positionInWantToken();\\n}\\n```\\n\\nThe `ConvexTradeExecutor` uses this function for calculations:\\n```\\nfunction positionInWantToken()\\n public\\n view\\n override\\n returns (uint256, uint256)\\n{\\n (\\n uint256 stakedLpBalanceInETH,\\n uint256 lpBalanceInETH,\\n uint256 ethBalance\\n ) = \\_getTotalBalancesInETH(true);\\n\\n return (\\n stakedLpBalanceInETH + lpBalanceInETH + ethBalance,\\n block.number\\n );\\n}\\n```\\n\\n```\\nfunction \\_getTotalBalancesInETH(bool useVirtualPrice)\\n internal\\n view\\n returns (\\n uint256 stakedLpBalance,\\n uint256 lpTokenBalance,\\n uint256 ethBalance\\n )\\n{\\n uint256 stakedLpBalanceRaw = baseRewardPool.balanceOf(address(this));\\n uint256 lpTokenBalanceRaw = lpToken.balanceOf(address(this));\\n\\n uint256 totalLpBalance = stakedLpBalanceRaw + lpTokenBalanceRaw;\\n\\n // Here, in order to prevent price manipulation attacks via curve pools,\\n // When getting total position value -> its calculated based on virtual price\\n // During withdrawal -> calc\\_withdraw\\_one\\_coin() is used to get an actual estimate of ETH received if we were to remove liquidity\\n // The following checks account for this\\n uint256 totalLpBalanceInETH = useVirtualPrice\\n ? \\_lpTokenValueInETHFromVirtualPrice(totalLpBalance)\\n : \\_lpTokenValueInETH(totalLpBalance);\\n\\n lpTokenBalance = useVirtualPrice\\n ? \\_lpTokenValueInETHFromVirtualPrice(lpTokenBalanceRaw)\\n : \\_lpTokenValueInETH(lpTokenBalanceRaw);\\n\\n stakedLpBalance = totalLpBalanceInETH - lpTokenBalance;\\n ethBalance = address(this).balance;\\n}\\n```\\n\\nThis function includes ETH balance, LP balance, and staked balance. But WETH balance is not included here. 
WETH tokens are initially transferred to the contract, and before the withdrawal, the contract also stores WETH.
Include the WETH balance in `totalFunds`.
null
```\\nfunction totalFunds() public view override returns (uint256, uint256) {\\n return ConvexPositionHandler.positionInWantToken();\\n}\\n```\\n
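A sketch of the fix in `positionInWantToken()`, assuming the handler holds a WETH token reference (the `wantToken` name here is an assumption):

```solidity
// Include the handler's idle WETH balance in the reported total funds
return (
    stakedLpBalanceInETH +
        lpBalanceInETH +
        ethBalance +
        wantToken.balanceOf(address(this)),
    block.number
);
```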
LyraPositionHandlerL2 inaccurate modifier onlyAuthorized may lead to funds loss if keeper is compromised
medium
The `LyraPositionHandlerL2` contract is operated either by the L2 keeper or by the L1 `LyraPositionHandler` via the `L2CrossDomainMessenger`. This is implemented through the `onlyAuthorized` modifier:\\n```\\nmodifier onlyAuthorized() {\\n require(\\n ((msg.sender == L2CrossDomainMessenger &&\\n OptimismL2Wrapper.messageSender() == positionHandlerL1) ||\\n msg.sender == keeper),\\n "ONLY\\_AUTHORIZED"\\n );\\n \\_;\\n}\\n```\\n\\nThis is set on:\\n`withdraw()`\\n`openPosition()`\\n`closePosition()`\\n`setSlippage()`\\n`deposit()`\\n`sweep()`\\n`setSocketRegistry()`\\n`setKeeper()`\\nFunctions 1-3 have a corresponding implementation on the L1 `LyraPositionHandler`, so they could indeed be called by it with the right parameters. However, 4-8 do not have an implemented way to call them from L1, and this modifier creates an unnecessarily expanded list of authorised entities that can call them.\\nAdditionally, even if their implementation is provided, it needs to be done carefully because `msg.sender` in their case is going to end up being the `L2CrossDomainMessenger`. For example, the `sweep()` function sends any specified token to `msg.sender`, with the intention likely being that the recipient is under the team's or the governance's control - yet, it will be `L2CrossDomainMessenger` and the tokens will likely be lost forever instead.\\nOn the other hand, the `setKeeper()` function would need a way to be called by something other than the keeper because it is intended to change the keeper itself. In the event that the access to the L2 keeper is compromised, and the L1 `LyraPositionHandler` has no way to call `setKeeper()` on the `LyraPositionHandlerL2`, the whole contract and its funds will be compromised as well. 
So, there needs to be some way to at least call the `setKeeper()` by something other than the keeper to ensure security of the funds on L2.\\n```\\nfunction closePosition(bool toSettle) public override onlyAuthorized {\\n LyraController.\\_closePosition(toSettle);\\n UniswapV3Controller.\\_estimateAndSwap(\\n false,\\n LyraController.sUSD.balanceOf(address(this))\\n );\\n}\\n\\n/\\*///////////////////////////////////////////////////////////////\\n MAINTAINANCE FUNCTIONS\\n//////////////////////////////////////////////////////////////\\*/\\n\\n/// @notice Sweep tokens\\n/// @param \\_token Address of the token to sweepr\\nfunction sweep(address \\_token) public override onlyAuthorized {\\n IERC20(\\_token).transfer(\\n msg.sender,\\n IERC20(\\_token).balanceOf(address(this))\\n );\\n}\\n\\n/// @notice socket registry setter\\n/// @param \\_socketRegistry new address of socket registry\\nfunction setSocketRegistry(address \\_socketRegistry) public onlyAuthorized {\\n socketRegistry = \\_socketRegistry;\\n}\\n\\n/// @notice keeper setter\\n/// @param \\_keeper new keeper address\\nfunction setKeeper(address \\_keeper) public onlyAuthorized {\\n keeper = \\_keeper;\\n}\\n```\\n
Create an additional modifier for functions intended to be called just by the keeper (onlyKeeper) such as functions 4-7, and create an additional modifier `onlyGovernance` for the `setKeeper()` function. As an example, the L1 `Vault` contract also has a `setKeeper()` function that has a `onlyGovernance()` modifier. Please note that this will likely require implementing a function for the system's governance that can call `LyraPositionHandlerL2.setKeeper()` via the `L2CrossDomainMessenger`.
null
```\\nmodifier onlyAuthorized() {\\n require(\\n ((msg.sender == L2CrossDomainMessenger &&\\n OptimismL2Wrapper.messageSender() == positionHandlerL1) ||\\n msg.sender == keeper),\\n "ONLY\\_AUTHORIZED"\\n );\\n \\_;\\n}\\n```\\n
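A sketch of the recommended modifier split; tracking a `governanceL1` address on L2 (relayed via the messenger) is an assumption about how governance would be wired:

```solidity
modifier onlyKeeper() {
    require(msg.sender == keeper, "ONLY_KEEPER");
    _;
}

modifier onlyGovernance() {
    // Only accept calls relayed from the L1 governance address
    require(
        msg.sender == L2CrossDomainMessenger &&
            OptimismL2Wrapper.messageSender() == governanceL1,
        "ONLY_GOVERNANCE"
    );
    _;
}

/// @notice keeper setter, callable only via L1 governance
function setKeeper(address _keeper) public onlyGovernance {
    keeper = _keeper;
}
```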
Harvester.harvest swaps have no slippage parameters
medium
As part of the vault strategy, all reward tokens for staking in the Convex ETH-stETH pool are claimed and swapped into ETH. The swaps for these tokens are done with no slippage at the moment, i.e. the expected output amount for all of them is given as 0.\\nIn particular, one reward token that is most susceptible to slippage is LDO, and its swap is implemented through the Uniswap router:\\n```\\nfunction \\_swapLidoForWETH(uint256 amountToSwap) internal {\\n IUniswapSwapRouter.ExactInputSingleParams\\n memory params = IUniswapSwapRouter.ExactInputSingleParams({\\n tokenIn: address(ldo),\\n tokenOut: address(weth),\\n fee: UNISWAP\\_FEE,\\n recipient: address(this),\\n deadline: block.timestamp,\\n amountIn: amountToSwap,\\n amountOutMinimum: 0,\\n sqrtPriceLimitX96: 0\\n });\\n uniswapRouter.exactInputSingle(params);\\n}\\n```\\n\\nThe swap is called with `amountOutMinimum: 0`, meaning that there is no slippage protection in this swap. This could result in a significant loss of yield from this reward as MEV bots could “sandwich” this swap by manipulating the price before this transaction and immediately reversing their action after the transaction, profiting at the expense of our swap. Moreover, the Uniswap pools seem to have low liquidity for the LDO token as opposed to Balancer or Sushiswap, further magnifying slippage issues and susceptibility to frontrunning.\\nThe other two tokens - CVX and CRV - are being swapped through their Curve pools, which have higher liquidity and are less susceptible to slippage. 
Nonetheless, MEV strategies have been getting more advanced and calling these swaps with 0 as expected output may place these transactions in danger of being frontrun and “sandwiched” as well.\\n```\\nif (cvxBalance > 0) {\\n cvxeth.exchange(1, 0, cvxBalance, 0, false);\\n}\\n// swap CRV to WETH\\nif (crvBalance > 0) {\\n crveth.exchange(1, 0, crvBalance, 0, false);\\n}\\n```\\n\\nIn these calls `.exchange` , the last `0` is the `min_dy` argument in the Curve pools swap functions that represents the minimum expected amount of tokens received after the swap, which is `0` in our case.
Introduce some slippage parameters into the swaps.
null
```\\nfunction \\_swapLidoForWETH(uint256 amountToSwap) internal {\\n IUniswapSwapRouter.ExactInputSingleParams\\n memory params = IUniswapSwapRouter.ExactInputSingleParams({\\n tokenIn: address(ldo),\\n tokenOut: address(weth),\\n fee: UNISWAP\\_FEE,\\n recipient: address(this),\\n deadline: block.timestamp,\\n amountIn: amountToSwap,\\n amountOutMinimum: 0,\\n sqrtPriceLimitX96: 0\\n });\\n uniswapRouter.exactInputSingle(params);\\n}\\n```\\n
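One common pattern is to derive a minimum output from an oracle quote and a slippage bound in basis points; `oracleQuoteInWETH` and `MAX_SLIPPAGE_BPS` are hypothetical names, while the router parameters follow the quoted snippet:

```solidity
// Compute a slippage-bounded minimum output for the LDO -> WETH swap
uint256 minOut =
    (oracleQuoteInWETH(amountToSwap) * (10_000 - MAX_SLIPPAGE_BPS)) / 10_000;

IUniswapSwapRouter.ExactInputSingleParams memory params = IUniswapSwapRouter
    .ExactInputSingleParams({
        tokenIn: address(ldo),
        tokenOut: address(weth),
        fee: UNISWAP_FEE,
        recipient: address(this),
        deadline: block.timestamp,
        amountIn: amountToSwap,
        amountOutMinimum: minOut, // instead of 0
        sqrtPriceLimitX96: 0
    });
uniswapRouter.exactInputSingle(params);
```

A bound in the same style can be passed as the `min_dy` argument of the Curve `exchange` calls for CVX and CRV.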
Harvester.rewardTokens doesn't account for LDO tokens
medium
As part of the vault's strategy, the reward tokens for participating in Curve's ETH-stETH pool and Convex staking are claimed and swapped for ETH. This is done by having the `ConvexPositionHandler` contract call the reward claims API from Convex via `baseRewardPool.getReward()`, which transfers the reward tokens to the handler's address. Then, the tokens are iterated through and sent to the harvester to be swapped from `ConvexPositionHandler` by getting their list from `harvester.rewardTokens()` and calling `harvester.harvest()`\\n```\\n// get list of tokens to transfer to harvester\\naddress[] memory rewardTokens = harvester.rewardTokens();\\n//transfer them\\nuint256 balance;\\nfor (uint256 i = 0; i < rewardTokens.length; i++) {\\n balance = IERC20(rewardTokens[i]).balanceOf(address(this));\\n\\n if (balance > 0) {\\n IERC20(rewardTokens[i]).safeTransfer(\\n address(harvester),\\n balance\\n );\\n }\\n}\\n\\n// convert all rewards to WETH\\nharvester.harvest();\\n```\\n\\nHowever, `harvester.rewardTokens()` doesn't have the LDO token's address in its list, so they will not be transferred to the harvester to be swapped.\\n```\\nfunction rewardTokens() external pure override returns (address[] memory) {\\n address[] memory rewards = new address[](2);\\n rewards[0] = address(crv);\\n rewards[1] = address(cvx);\\n return rewards;\\n}\\n```\\n\\nAs a result, `harvester.harvest()` will not be able to execute its `_swapLidoForWETH()` function since its `ldoBalance` will be 0. This results in missed rewards and therefore yield for the vault as part of its normal flow.\\nThere is a possible mitigation in the current state of the contract that would require governance to call `sweep()` on the LDO balance from the `BaseTradeExecutor` contract (that `ConvexPositionHandler` inherits) and then transferring those LDO tokens to the harvester contract to perform the swap at a later rewards claim. 
This, however, requires transactions separate from the intended flow of the system as well as governance intervention.
Add the LDO token address to the `rewardTokens()` function by adding the following line `rewards[2] = address(ldo);`
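A sketch of the adjusted function, assuming `ldo` is declared in the harvester alongside `crv` and `cvx` (the `_swapLidoForWETH()` helper implies it is):

```
function rewardTokens() external pure override returns (address[] memory) {
    address[] memory rewards = new address[](3);
    rewards[0] = address(crv);
    rewards[1] = address(cvx);
    // include LDO so ConvexPositionHandler forwards it before harvest()
    rewards[2] = address(ldo);
    return rewards;
}
```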
null
```\\n// get list of tokens to transfer to harvester\\naddress[] memory rewardTokens = harvester.rewardTokens();\\n//transfer them\\nuint256 balance;\\nfor (uint256 i = 0; i < rewardTokens.length; i++) {\\n balance = IERC20(rewardTokens[i]).balanceOf(address(this));\\n\\n if (balance > 0) {\\n IERC20(rewardTokens[i]).safeTransfer(\\n address(harvester),\\n balance\\n );\\n }\\n}\\n\\n// convert all rewards to WETH\\nharvester.harvest();\\n```\\n
Keeper design complexity
medium
The current design of the protocol relies on the keeper being operated correctly in a complex manner. Since the offchain code for the keeper wasn't in scope of this audit, the following is a commentary on the complexity of the keeper operations in the context of the contracts. Keeper logic such as the order of operations and function argument parameters with log querying are some examples where if the keeper doesn't execute them correctly, there may be inconsistencies and issues with accounting of vault shares and vault funds resulting in unexpected behaviour. While it may represent little risk or issues to the current Brahma-fi team as the vault is recently live, the keeper logic and exact steps should be well documented so that public keepers (if and when they are enabled) can execute the logic securely and future iterations of the vault code can account for any intricacies of the keeper logic.\\n1. Order of operations: Convex rewards & new depositors profiting at the expense of old depositors' yielded reward tokens. As part of the vault's strategy, the depositors' ETH is provided to Curve and the LP tokens are staked in Convex, which yield rewards such as CRV, CVX, and LDO tokens. As new depositors provide their ETH, the vault shares minted for their deposits will be less compared to old deposits as they account for the increasing value of LP tokens staked in these pools. In other words, if the first depositor provides 1 ETH, then when a new depositor provides 1 ETH much later, the new depositor will get less shares back as the `totalVaultFunds()` will increase:\\n```\\nshares = totalSupply() > 0\\n ? 
(totalSupply() \\* amountIn) / totalVaultFunds()\\n : amountIn;\\n```\\n\\n```\\nfunction totalVaultFunds() public view returns (uint256) {\\n return\\n IERC20(wantToken).balanceOf(address(this)) + totalExecutorFunds();\\n}\\n```\\n\\n```\\nfunction totalFunds() public view override returns (uint256, uint256) {\\n return ConvexPositionHandler.positionInWantToken();\\n}\\n```\\n\\n```\\nfunction positionInWantToken()\\n public\\n view\\n override\\n returns (uint256, uint256)\\n{\\n (\\n uint256 stakedLpBalanceInETH,\\n uint256 lpBalanceInETH,\\n uint256 ethBalance\\n ) = \\_getTotalBalancesInETH(true);\\n\\n return (\\n stakedLpBalanceInETH + lpBalanceInETH + ethBalance,\\n block.number\\n );\\n}\\n```\\n\\nHowever, this does not account for the reward tokens yielded throughout that time. From the smart contract logic alone, there is no requirement to first execute the reward token harvest. It is up to the keeper to execute `ConvexTradeExecutor.claimRewards` in order to claim and swap their rewards into ETH, which only then will be included into the yield in the above `ConvexPositionHandler.positionInWantToken` function. If this is not done prior to processing new deposits and minting new shares, new depositors would unfairly benefit from the reward tokens' yield that was generated before they deposited but accounted for in the vault funds only after they deposited.\\n2. Order of operations: closing Lyra options before processing new deposits.\\nThe other part of the vault's strategy is utilising the yield from Convex to purchase options from Lyra on Optimism. While Lyra options are risky and can become worthless in the event of bad trades, only yield is used for them, therefore keeping user deposits' initial value safe. However, their value could also yield significant returns, increasing the overall funds of the vault. 
Just as with `ConvexTradeExecutor`, `LyraTradeExecutor` also has a `totalFunds()` function that feeds into the vault's `totalVaultFunds()` function. In Lyra's case, however, it is a manually set value by the keeper that is supposed to represent the value of Lyra L2 options:\\n```\\nfunction totalFunds()\\n public\\n view\\n override\\n returns (uint256 posValue, uint256 lastUpdatedBlock)\\n{\\n return (\\n positionInWantToken.posValue +\\n IERC20(vaultWantToken()).balanceOf(address(this)),\\n positionInWantToken.lastUpdatedBlock\\n );\\n}\\n```\\n\\n```\\nfunction setPosValue(uint256 \\_posValue) public onlyKeeper {\\n LyraPositionHandler.\\_setPosValue(\\_posValue);\\n}\\n```\\n\\n```\\nfunction \\_setPosValue(uint256 \\_posValue) internal {\\n positionInWantToken.posValue = \\_posValue;\\n positionInWantToken.lastUpdatedBlock = block.number;\\n}\\n```\\n\\nSolely from the smart contract logic, there is a possibility that a user deposits when Lyra options are valued high, meaning the total vault funds are high as well, thus decreasing the amount of shares the user would have received if it weren't for the Lyra options' value. Consequently, if after the deposit the Lyra options become worthless, decreasing the total vault funds, the user's newly minted shares will now represent less than what they have deposited.\\nWhile this is not currently mitigated by smart contract logic, it may be worked around by the keeper first settling and closing all Lyra options and transferring all their yielded value in ETH, if any, to the Convex trade executor. Only then the keeper would process new deposits and mint new shares. This order of operations is critical to maintain the vault's intended safe strategy of maintaining the user's deposited value, and is dependent entirely on the keeper offchain logic.\\n3. 
Order of operations: additional trade executors and their specific management. Similarly to the above examples, as more trade executors and position handlers are added to the vault, the complexity for the keeper will go up significantly, requiring it to maintain all correct orders of operations not just to keep the shares and funds accounting intact, but simply for the trade executors to function normally. For example, in the case of Lyra, the keepers need to manually call `confirmDeposit` and `confirmWithdraw` to update their `depositStatus` and `withdrawalStatus` respectively to continue normal operations, or otherwise new deposits and withdrawals wouldn't be processed. On the other hand, the Convex executor does it automatically. Due to the system design, there may be no single standard way to handle a trade executor. New executors may also require specific calls to be done manually, increasing the overall complexity of the keeper logic needed to support the system.\\n4. Keeper calls & arguments: depositFunds/batchDeposit and initiateWithdrawal/batchWithdraw `userAddresses[]` array + gas overhead. With the current gated approach and batching for deposits and withdrawals to and from the vault, users aren't able to directly mint and redeem their vault shares. Instead, they interact with the `Batcher` contract that then communicates with the `Vault` contract with the help of the keeper. However, while each user's deposit and withdrawal amounts are registered in the contract state variables such as `depositLedger[user]` and `withdrawLedger[user]`, and there is an event emitted with the user address and their action, to process them the keeper is required to keep track of all the user addresses in the batch they need to process. In particular, the keeper needs to provide `address[] memory users` for both `batchDeposit()` and `batchWithdraw()` functions that communicate with the vault.
There is no stored list of users within the contract that could provide or verify the right users, so it is entirely up to the keeper's offchain logic to query the logs and retrieve the addresses required. Therefore, depending on the size of the `address[] memory users` array, the keepers may need to consider the transaction gas limit, possibly requiring splitting the array up and doing several transactions to process all of them. In addition, in the event of withdrawals, the keepers need to calculate how much of the `wantToken` (WETH in our case) will be required to process the withdrawals, and call `withdrawFromExecutor()` with that amount to provide enough assets to cover withdrawals from the vault.\\n5. Timing: 50 block radius for updates on trade executors that need to have their values updated via a call. Some trade executors, like the Convex one, can retrieve their funds value at any time from Layer 1, thereby always being up to date with the current block. Others, like the Lyra trade executor, require the keeper to update their position value by initiating a call, which also updates their `positionInWantToken.lastUpdatedBlock` state variable. However, this variable is also read during the `vault.totalVaultFunds()` call made during deposits and withdrawals via `totalExecutorFunds()`, which eventually calls `areFundsUpdated(blockUpdated)`. This is a check to ensure that the current transaction's `block.number <= _blockUpdated + BLOCK_LIMIT`, where `BLOCK_LIMIT=50` blocks, i.e. roughly 12-15 min. As a result, keepers need to make sure that all executors that require such a call have their position values updated before, and rather close to, processing deposits or withdrawals, or `areFundsUpdated()` will revert those calls.
Document the exact order of operations, steps, necessary logs and parameters that keepers need to keep track of in order for the vault strategy to succeed.
null
```\\nshares = totalSupply() > 0\\n ? (totalSupply() \\* amountIn) / totalVaultFunds()\\n : amountIn;\\n```\\n
Approving MAX_UINT amount of ERC20 tokens
low
Approving the maximum value of uint256 is a known practice to save gas. However, this pattern was proven to increase the impact of an attack many times in the past, in case the approved contract gets hacked.\\n```\\nIERC20(vaultWantToken()).approve(vault, MAX\\_INT);\\n```\\n\\n```\\nIERC20(vaultInfo.tokenAddress).approve(vaultAddress, type(uint256).max);\\n```\\n\\n```\\nIERC20(LP\\_TOKEN).safeApprove(ETH\\_STETH\\_POOL, type(uint256).max);\\n\\n// Approve max LP tokens to convex booster\\nIERC20(LP\\_TOKEN).safeApprove(\\n address(CONVEX\\_BOOSTER),\\n type(uint256).max\\n);\\n```\\n\\n```\\ncrv.safeApprove(address(crveth), type(uint256).max);\\n// max approve CVX to CVX/ETH pool on curve\\ncvx.safeApprove(address(cvxeth), type(uint256).max);\\n// max approve LDO to uniswap swap router\\nldo.safeApprove(address(uniswapRouter), type(uint256).max);\\n```\\n\\n```\\nIERC20(wantTokenL2).safeApprove(\\n address(UniswapV3Controller.uniswapRouter),\\n type(uint256).max\\n);\\n// approve max susd balance to uniV3 router\\nLyraController.sUSD.safeApprove(\\n address(UniswapV3Controller.uniswapRouter),\\n type(uint256).max\\n);\\n```\\n
Consider approving the exact amount that's needed to be transferred, or alternatively, add an external function that allows the revocation of approvals.
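Alternatively, a hypothetical revocation helper (the name and access modifier are assumptions, not from the codebase) could look like:

```
// Allows governance to revoke a previously granted max approval
function revokeTokenApproval(address token, address spender)
    external
    onlyGovernance
{
    IERC20(token).safeApprove(spender, 0);
}
```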
null
```\\nIERC20(vaultWantToken()).approve(vault, MAX\\_INT);\\n```\\n
Batcher.depositFunds may allow for more deposits than vaultInfo.maxAmount
low
As part of a gradual rollout strategy, the Brahma-fi system of contracts has a limit of how much can be deposited into the protocol. This is implemented through the `Batcher` contract that allows users to deposit into it and keeps the amount they have deposited in the `depositLedger[recipient]` state variable. In order to cap how much is deposited, the user's input `amountIn` is evaluated within the following statement:\\n```\\nrequire(\\n IERC20(vaultInfo.vaultAddress).totalSupply() +\\n pendingDeposit -\\n pendingWithdrawal +\\n amountIn <=\\n vaultInfo.maxAmount,\\n "MAX\\_LIMIT\\_EXCEEDED"\\n);\\n```\\n\\nHowever, while `pendingDeposit`, `amountIn`, and `vaultInfo.maxAmount` are denominated in the vault asset token (WETH in our case), `IERC20(vaultInfo.vaultAddress).totalSupply()` and `pendingWithdrawal` represent vault shares tokens, creating potential mismatches in this evaluation.\\nAs the yield brings in more and more funds to the vault, the amount of shares minted for each deposited token decreases, so `totalSupply()` becomes less than the total deposited amount (not just vault funds) as the strategy succeeds over time. For example, at first `X` deposited tokens would mint `X` shares. After some time, this would create additional funds in the vault through yield, and another `X` deposit of tokens would mint less than `X` shares, say `X-Y`, where `Y` is some number greater than 0 representing the difference in the number of shares minted. So, while there were `2*X` deposited tokens, `totalSupply()=(2*X-Y)` shares would have been minted in total. However, at the time of the next deposit, a user's `amountIn` will be added to `totalSupply()=(2*X-Y)` shares instead of the greater `2*X` number of deposited tokens. So, this will undershoot the actual amount of tokens deposited after this user's deposit, thus potentially evaluating it as less than `maxAmount`, and letting more user deposits into the vault than intended.
Consider either documenting this potential discrepancy or keeping track of all deposits in a state variable and using that inside the `require` statement..
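A sketch of the second option, tracking cumulative deposits in a hypothetical `totalDeposited` variable denominated in the want token (names and surrounding wiring are assumptions):

```
uint256 public totalDeposited; // cumulative want-token deposits, hypothetical

function depositFunds(uint256 amountIn, address recipient) external {
    // compare like units: tokens deposited against a token-denominated cap
    require(
        totalDeposited + amountIn <= vaultInfo.maxAmount,
        "MAX_LIMIT_EXCEEDED"
    );
    totalDeposited += amountIn;
    // ... existing deposit logic
}
```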
null
```\\nrequire(\\n IERC20(vaultInfo.vaultAddress).totalSupply() +\\n pendingDeposit -\\n pendingWithdrawal +\\n amountIn <=\\n vaultInfo.maxAmount,\\n "MAX\\_LIMIT\\_EXCEEDED"\\n);\\n```\\n
BaseTradeExecutor.confirmDeposit | confirmWithdraw - Violation of the “checks-effects-interactions” pattern
low
Both `confirmDeposit, confirmWithdraw` might be re-entered by the keeper (in case it is a contract), in case the derived contract allows the execution of untrusted code.\\n```\\nfunction confirmDeposit() public override onlyKeeper {\\n require(depositStatus.inProcess, "DEPOSIT\\_COMPLETED");\\n \\_confirmDeposit();\\n depositStatus.inProcess = false;\\n}\\n```\\n\\n```\\nfunction confirmWithdraw() public override onlyKeeper {\\n require(withdrawalStatus.inProcess, "WIHDRW\\_COMPLETED");\\n \\_confirmWithdraw();\\n withdrawalStatus.inProcess = false;\\n}\\n```\\n
Although the impact is very limited, it is recommended to implement the “checks-effects-interactions” in both functions.
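Applying the pattern here simply means updating the status flag before the internal call that may execute derived-contract code, e.g. for `confirmDeposit`:

```
function confirmDeposit() public override onlyKeeper {
    require(depositStatus.inProcess, "DEPOSIT_COMPLETED");
    // effect before interaction: a re-entering keeper now fails the require
    depositStatus.inProcess = false;
    _confirmDeposit();
}
```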
null
```\\nfunction confirmDeposit() public override onlyKeeper {\\n require(depositStatus.inProcess, "DEPOSIT\\_COMPLETED");\\n \\_confirmDeposit();\\n depositStatus.inProcess = false;\\n}\\n```\\n
Reactivated gauges can't queue up rewards
high
Active gauges, as set via the `ERC20Gauges.addGauge()` function by authorised users, get their rewards queued up in the `FlywheelGaugeRewards._queueRewards()` function. As part of it, their associated struct `QueuedRewards` updates its `storedCycle` value to the cycle in which they get queued up:\\n```\\ngaugeQueuedRewards[gauge] = QueuedRewards({\\n priorCycleRewards: queuedRewards.priorCycleRewards + completedRewards,\\n cycleRewards: uint112(nextRewards),\\n storedCycle: currentCycle\\n});\\n```\\n\\nHowever, these gauges may be deactivated in `ERC20Gauges.removeGauge()`, and they will then be ignored in both `FlywheelGaugeRewards.queueRewardsForCycle()` and `FlywheelGaugeRewards.queueRewardsForCyclePaginated()`, because both use `gaugeToken.gauges()` to get the set of gauges for which to queue up rewards for the cycle, and that only gives active gauges. Therefore, any updates `FlywheelGaugeRewards` makes to its state will not be done to deactivated gauges' `QueuedRewards` structs. In particular, the `gaugeCycle` contract state variable will keep advancing throughout its cycles, while `QueuedRewards.storedCycle` will retain its previously set value, which is the cycle where it was queued and not 0.\\nOnce such a gauge is reactivated after at least one full cycle has passed without it, issues arise. It will now be returned by `gaugeToken.gauges()` to be processed in either `FlywheelGaugeRewards.queueRewardsForCycle()` or `FlywheelGaugeRewards.queueRewardsForCyclePaginated()`, but, once the reactivated gauge is passed to `_queueRewards()`, it will fail an assert:\\n```\\nassert(queuedRewards.storedCycle == 0 || queuedRewards.storedCycle >= lastCycle);\\n```\\n\\nThis is because it already has a set value from the cycle it was processed in previously (i.e. 
storedCycle>0), and, since that cycle is at least one full cycle behind the contract's state, it will also not pass the second condition `queuedRewards.storedCycle >= lastCycle`.\\nThe result is that this gauge is locked out of queuing up rewards, because `queuedRewards.storedCycle` is only synchronised with the contract's cycle later in `_queueRewards()`, which will now always fail for this gauge.
Account for the reactivated gauges that previously went through the rewards queue process, such as introducing a separate flow for newly activated gauges. However, any changes such as removing the above mentioned `assert()` should be carefully validated for other downstream logic that may use the `QueuedRewards.storedCycle` value. Therefore, it is recommended to review the state transitions as opposed to only passing this specific check.
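One possible direction, sketched only and not a definitive fix: explicitly reset stale queue state for a gauge that missed at least one full cycle, rather than asserting. Any such change would need validation against downstream uses of `storedCycle` and the accounting of the zeroed rewards:

```
if (
    queuedRewards.storedCycle != 0 &&
    queuedRewards.storedCycle < lastCycle
) {
    // gauge was inactive for at least one full cycle; its queued rewards
    // belong to a cycle that has already been settled, so start fresh
    queuedRewards.cycleRewards = 0;
    queuedRewards.storedCycle = 0;
}
assert(queuedRewards.storedCycle == 0 || queuedRewards.storedCycle >= lastCycle);
```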
null
```\\ngaugeQueuedRewards[gauge] = QueuedRewards({\\n priorCycleRewards: queuedRewards.priorCycleRewards + completedRewards,\\n cycleRewards: uint112(nextRewards),\\n storedCycle: currentCycle\\n});\\n```\\n
Reactivated gauges have incorrect accounting for the last cycle's rewards
medium
As described in https://github.com/ConsenSysDiligence/fei-labs-audit-2022-04/issues/3, reactivated gauges that previously had queued up rewards have a mismatch between their `storedCycle` and the contract's `gaugeCycle` state variable.\\nDue to this mismatch, there is also a resulting issue with the accounting logic for their completed rewards:\\n```\\nuint112 completedRewards = queuedRewards.storedCycle == lastCycle ? queuedRewards.cycleRewards : 0;\\n```\\n\\nThis then produces an incorrect value for QueuedRewards.priorCycleRewards:\\n```\\npriorCycleRewards: queuedRewards.priorCycleRewards + completedRewards,\\n```\\n\\nAs `completedRewards` will now be equal to 0 instead of the previous cycle's rewards for that gauge. This may cause a loss of rewards accounted for this gauge as this value is later used in `getAccruedRewards()`.
Consider changing the logic of the check so that `storedCycle` values further in the past than `lastCycle` produce the right rewards for this expression, such as using `<=` instead of `==` and adding an explicit `storedCycle == 0` check to account for the initial scenario.
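The adjusted expression could look like the following sketch:

```
uint112 completedRewards =
    (queuedRewards.storedCycle != 0 &&
        queuedRewards.storedCycle <= lastCycle)
        ? queuedRewards.cycleRewards
        : 0;
```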
null
```\\nuint112 completedRewards = queuedRewards.storedCycle == lastCycle ? queuedRewards.cycleRewards : 0;\\n```\\n
Lack of input validation in delegateBySig
low
The recovered `signer` from `ecrecover` is used without validation: `ecrecover` returns `address(0)` for an invalid signature, and the function neither checks for that nor verifies the signer against an expected address before calling `_delegate`:\\n```\\nfunction delegateBySig(\\n address delegatee,\\n uint256 nonce,\\n uint256 expiry,\\n uint8 v,\\n bytes32 r,\\n bytes32 s\\n) public {\\n require(block.timestamp <= expiry, "ERC20MultiVotes: signature expired");\\n address signer = ecrecover(\\n keccak256(\\n abi.encodePacked(\\n "\\x19\\x01",\\n DOMAIN\\_SEPARATOR(),\\n keccak256(abi.encode(DELEGATION\\_TYPEHASH, delegatee, nonce, expiry))\\n )\\n ),\\n v,\\n r,\\n s\\n );\\n require(nonce == nonces[signer]++, "ERC20MultiVotes: invalid nonce");\\n \\_delegate(signer, delegatee);\\n}\\n```\\n
Introduce a zero address check i.e `require signer!=address(0)` and check if the recovered signer is an expected address. Refer to ERC20's permit for inspiration.
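A sketch of the hardened tail of the function (`digest` stands in for the EIP-712 hash computed above):

```
address signer = ecrecover(digest, v, r, s);
// ecrecover returns address(0) for an invalid signature
require(signer != address(0), "ERC20MultiVotes: invalid signature");
require(nonce == nonces[signer]++, "ERC20MultiVotes: invalid nonce");
_delegate(signer, delegatee);
```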
null
```\\nfunction delegateBySig(\\n address delegatee,\\n uint256 nonce,\\n uint256 expiry,\\n uint8 v,\\n bytes32 r,\\n bytes32 s\\n) public {\\n require(block.timestamp <= expiry, "ERC20MultiVotes: signature expired");\\n address signer = ecrecover(\\n keccak256(\\n abi.encodePacked(\\n "\\x19\\x01",\\n DOMAIN\\_SEPARATOR(),\\n keccak256(abi.encode(DELEGATION\\_TYPEHASH, delegatee, nonce, expiry))\\n )\\n ),\\n v,\\n r,\\n s\\n );\\n require(nonce == nonces[signer]++, "ERC20MultiVotes: invalid nonce");\\n \\_delegate(signer, delegatee);\\n}\\n```\\n
Decreasing maxGauges does not account for users' previous gauge list size.
low
`ERC20Gauges` contract has a `maxGauges` state variable meant to represent the maximum amount of gauges a user can allocate to. As per the natspec, it is meant to protect against gas DOS attacks upon token transfer to allow complicated transactions to fit in a block. There is also a function `setMaxGauges` for authorised users to decrease or increase this state variable.\\n```\\nfunction setMaxGauges(uint256 newMax) external requiresAuth {\\n uint256 oldMax = maxGauges;\\n maxGauges = newMax;\\n\\n emit MaxGaugesUpdate(oldMax, newMax);\\n}\\n```\\n\\nHowever, if it is decreased and there are users that have already reached the previous maximum that was larger, there may be unexpected behavior. All of these users' gauges will remain active and manageable, such as have user gauge weights incremented or decremented. So it could be possible that for such a user address `user_address`, numUserGauges(user_address) > `maxGauges`. While in the current contract logic this does not cause issues, `maxGauges` is a public variable that may be used by other systems. If unaccounted for, this discrepancy between the contract's `maxGauges` and the users' actual number of gauges given by `numUserGauges()` could, for example, cause gauges to be skipped or fail loops bounded by `maxGauges` in other systems' logic that try and go through all user gauges.
Either document the potential discrepancy between the user gauges size and the `maxGauges` state variable, or limit `maxGauges` to be only called within the contract thereby forcing other contracts to retrieve user gauge list size through `numUserGauges()`.
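For external consumers, a sketch of iterating by the real per-user list rather than the global cap (this assumes a `userGauges(address)` view exists alongside `numUserGauges()`):

```
// bound the loop by the user's actual gauge count, which may exceed
// maxGauges after the cap was decreased
address[] memory gauges = gaugeToken.userGauges(user);
for (uint256 i = 0; i < gauges.length; i++) {
    // ... per-gauge logic
}
```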
null
```\\nfunction setMaxGauges(uint256 newMax) external requiresAuth {\\n uint256 oldMax = maxGauges;\\n maxGauges = newMax;\\n\\n emit MaxGaugesUpdate(oldMax, newMax);\\n}\\n```\\n
Decrementing a gauge by 0 that is not in the user gauge list will fail an assert.
low
`ERC20Gauges._decrementGaugeWeight` has an edge case scenario where a user can attempt to decrement a `gauge` that is not in the user `gauge` list by 0 `weight`, which would trigger a failure in an assert.\\n```\\nfunction \\_decrementGaugeWeight(\\n address user,\\n address gauge,\\n uint112 weight,\\n uint32 cycle\\n) internal {\\n uint112 oldWeight = getUserGaugeWeight[user][gauge];\\n\\n getUserGaugeWeight[user][gauge] = oldWeight - weight;\\n if (oldWeight == weight) {\\n // If removing all weight, remove gauge from user list.\\n assert(\\_userGauges[user].remove(gauge));\\n }\\n```\\n\\nAs `_decrementGaugeWeight`, `decrementGauge`, or `decrementGauges` don't explicitly check that a `gauge` belongs to the user, the contract logic continues with its operations in `_decrementGaugeWeight` for any gauges passed to it. In general this is fine because if a user tries to decrement non-zero `weight` from a `gauge` they have no allocation to, thus getting `getUserGaugeWeight[user][gauge]=0`, there would be a revert due to a negative value being passed to `getUserGaugeWeight[user][gauge]`\\n```\\nuint112 oldWeight = getUserGaugeWeight[user][gauge];\\n\\ngetUserGaugeWeight[user][gauge] = oldWeight - weight;\\n```\\n\\nHowever, passing a `weight=0` parameter with a `gauge` that doesn't belong to the user, would successfully process that line. This would then be followed by an evaluation `if (oldWeight == weight)`, which would also succeed since both are 0, to finally reach an assert that will verify a remove of that `gauge` from the user `gauge` list. However, it will fail since it was never there in the first place.\\n```\\nassert(\\_userGauges[user].remove(gauge));\\n```\\n\\nAlthough an edge case with no effect on contract state's health, it may happen with front end bugs or incorrect user transactions, and it is best not to have asserts fail.
Replace `assert()` with a `require()` or verify that the gauge belongs to the user prior to performing any operations.
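For example, membership could be checked up front (this sketch assumes `_userGauges[user]` is an EnumerableSet-style structure exposing `contains`):

```
// reject decrements against gauges the user never allocated to
require(
    _userGauges[user].contains(gauge),
    "ERC20Gauges: gauge not in user list"
);
```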
null
```\\nfunction \\_decrementGaugeWeight(\\n address user,\\n address gauge,\\n uint112 weight,\\n uint32 cycle\\n) internal {\\n uint112 oldWeight = getUserGaugeWeight[user][gauge];\\n\\n getUserGaugeWeight[user][gauge] = oldWeight - weight;\\n if (oldWeight == weight) {\\n // If removing all weight, remove gauge from user list.\\n assert(\\_userGauges[user].remove(gauge));\\n }\\n```\\n
Undelegating 0 votes from an address who is not a delegate of a user will fail an assert.
low
This is a scenario similar to issue 5.5. `ERC20MultiVotes._undelegate` has an edge case where a user can attempt to undelegate from a `delegatee` that is not in the user delegates list by 0 `amount`, which would trigger a failure in an assert.\\n```\\nfunction \\_undelegate(\\n address delegator,\\n address delegatee,\\n uint256 amount\\n) internal virtual {\\n uint256 newDelegates = \\_delegatesVotesCount[delegator][delegatee] - amount;\\n\\n if (newDelegates == 0) {\\n assert(\\_delegates[delegator].remove(delegatee)); // Should never fail.\\n }\\n```\\n\\nAs neither `_undelegate` nor `undelegate` explicitly checks that a `delegatee` belongs to the user, the contract logic continues with its operations in `_undelegate` for the `delegatee` passed to it. In general this is fine because if a user tries to `undelegate` a non-zero `amount` from a `delegatee` they have no votes delegated to, thus getting `_delegatesVotesCount[delegator][delegatee]=0`, there would be a revert due to a negative value being passed to `uint256 newDelegates`\\n```\\nuint256 newDelegates = \\_delegatesVotesCount[delegator][delegatee] - amount;\\n```\\n\\nHowever, passing an `amount=0` parameter with a `delegatee` that doesn't belong to the user would successfully process that line. This would then be followed by an evaluation `if (newDelegates == 0)`, which would succeed, to finally reach an assert that will verify a remove of that `delegatee` from the user delegates list. However, it will fail since it was never there in the first place.\\n```\\nassert(\\_delegates[delegator].remove(delegatee)); // Should never fail.\\n```\\n\\nAlthough an edge case with no effect on contract state's health, it may happen with front end bugs or incorrect user transactions, and it is best not to have asserts fail, as per the dev comment in that line “// Should never fail”.
Replace `assert()` with a `require()` or verify that the delegatee belongs to the user prior to performing any operations.
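Analogously to the gauge case, a sketch (assuming `_delegates[delegator]` exposes a `contains` check):

```
// reject undelegations from addresses the user never delegated to
require(
    _delegates[delegator].contains(delegatee),
    "ERC20MultiVotes: not a delegatee"
);
```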
null
```\\nfunction \\_undelegate(\\n address delegator,\\n address delegatee,\\n uint256 amount\\n) internal virtual {\\n uint256 newDelegates = \\_delegatesVotesCount[delegator][delegatee] - amount;\\n\\n if (newDelegates == 0) {\\n assert(\\_delegates[delegator].remove(delegatee)); // Should never fail.\\n }\\n```\\n
xTRIBE.emitVotingBalances - DelegateVotesChanged event can be emitted by anyone
medium
`xTRIBE.emitVotingBalances` is an external function without authentication constraints. It means anyone can call it and emit `DelegateVotesChanged` which may impact other layers of code that rely on these events.\\n```\\nfunction emitVotingBalances(address[] calldata accounts) external {\\n uint256 size = accounts.length;\\n\\n for (uint256 i = 0; i < size; ) {\\n emit DelegateVotesChanged(accounts[i], 0, getVotes(accounts[i]));\\n\\n unchecked {\\n i++;\\n }\\n }\\n}\\n```\\n
Consider restricting access to this function for allowed accounts only.
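Since related admin actions already use `requiresAuth` (e.g. `setMaxGauges` in `ERC20Gauges`), the same modifier could gate this function:

```
function emitVotingBalances(address[] calldata accounts)
    external
    requiresAuth // restrict emission to authorised accounts
{
    // loop body unchanged
}
```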
null
```\\nfunction emitVotingBalances(address[] calldata accounts) external {\\n uint256 size = accounts.length;\\n\\n for (uint256 i = 0; i < size; ) {\\n emit DelegateVotesChanged(accounts[i], 0, getVotes(accounts[i]));\\n\\n unchecked {\\n i++;\\n }\\n }\\n}\\n```\\n
Accounts that claim incentives immediately before the migration will be stuck
medium
For accounts that existed before the migration to the new incentive calculation, the following happens when they claim incentives for the first time after the migration: First, the incentives that are still owed from before the migration are computed according to the old formula; the incentives since the migration are calculated according to the new logic, and the two values are added together. The first part - calculating the pre-migration incentives according to the old formula - happens in function MigrateIncentives.migrateAccountFromPreviousCalculation; the following lines are of particular interest in the current context:\\n```\\nuint256 timeSinceMigration = finalMigrationTime - lastClaimTime;\\n\\n// (timeSinceMigration \\* INTERNAL\\_TOKEN\\_PRECISION \\* finalEmissionRatePerYear) / YEAR\\nuint256 incentiveRate =\\n timeSinceMigration\\n .mul(uint256(Constants.INTERNAL\\_TOKEN\\_PRECISION))\\n // Migration emission rate is stored as is, denominated in whole tokens\\n .mul(finalEmissionRatePerYear).mul(uint256(Constants.INTERNAL\\_TOKEN\\_PRECISION))\\n .div(Constants.YEAR);\\n\\n// Returns the average supply using the integral of the total supply.\\nuint256 avgTotalSupply = finalTotalIntegralSupply.sub(lastClaimIntegralSupply).div(timeSinceMigration);\\n```\\n\\nThe division in the last line will throw if `finalMigrationTime` and `lastClaimTime` are equal. This will happen if an account claims incentives immediately before the migration happens - where “immediately” means in the same block. In such a case, the account will be stuck as any attempt to claim incentives will revert.
The function should return `0` if `finalMigrationTime` and `lastClaimTime` are equal. Moreover, the variable name `timeSinceMigration` is misleading, as the variable doesn't store the time since the migration but the time between the last incentive claim and the migration.
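A sketch of the early return, with the variable renamed as suggested:

```
uint256 timeSinceLastClaim = finalMigrationTime - lastClaimTime;
// the account claimed in the same block as the migration: nothing accrued
// between the claim and the migration, so there is nothing to pay out
if (timeSinceLastClaim == 0) return 0;
```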
null
```\\nuint256 timeSinceMigration = finalMigrationTime - lastClaimTime;\\n\\n// (timeSinceMigration \\* INTERNAL\\_TOKEN\\_PRECISION \\* finalEmissionRatePerYear) / YEAR\\nuint256 incentiveRate =\\n timeSinceMigration\\n .mul(uint256(Constants.INTERNAL\\_TOKEN\\_PRECISION))\\n // Migration emission rate is stored as is, denominated in whole tokens\\n .mul(finalEmissionRatePerYear).mul(uint256(Constants.INTERNAL\\_TOKEN\\_PRECISION))\\n .div(Constants.YEAR);\\n\\n// Returns the average supply using the integral of the total supply.\\nuint256 avgTotalSupply = finalTotalIntegralSupply.sub(lastClaimIntegralSupply).div(timeSinceMigration);\\n```\\n
type(T).max is inclusive
low
Throughout the codebase, there are checks whether a number can be represented by a certain type.\\n```\\nrequire(accumulatedNOTEPerNToken < type(uint128).max); // dev: accumulated NOTE overflow\\n```\\n\\n```\\nrequire(blockTime < type(uint32).max); // dev: block time overflow\\n```\\n\\n```\\nrequire(totalSupply <= type(uint96).max);\\nrequire(blockTime <= type(uint32).max);\\n```\\n\\nSometimes these checks use `<=`, sometimes they use `<`.
`type(T).max` is inclusive, i.e., it is the greatest number that can be represented with type `T`. Strictly speaking, it can and should therefore be used consistently with `<=` instead of `<`.
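For illustration, the `blockTime` check rewritten with the inclusive comparison (same dev-comment convention as the codebase):\\n```\\n// type(uint32).max itself is representable, so <= is the correct bound\\nrequire(blockTime <= type(uint32).max); // dev: block time overflow\\n```\\n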
null
```\\nrequire(accumulatedNOTEPerNToken < type(uint128).max); // dev: accumulated NOTE overflow\\n```\\n
FlasherFTM - Unsolicited invocation of the callback (CREAM auth bypass)
high
TL;DR: Anyone can call `ICTokenFlashloan(crToken).flashLoan(address(FlasherFTM), address(FlasherFTM), info.amount, params)` directly and pass validation checks in `onFlashLoan()`. This call forces it to accept unsolicited flash loans and execute the actions provided under the attacker's `FlashLoan.Info`.\\n`receiver.onFlashLoan(initiator, token, amount, ...)` is called when receiving a flash loan. According to EIP-3156, the `initiator` is `msg.sender` so that one can use it to check if the call to `receiver.onFlashLoan()` was unsolicited or not.\\nThird-party Flash Loan provider contracts are often upgradeable.\\nFor example, the Geist lending contract configured with this system is upgradeable. Upgradeable contracts bear the risk that one cannot assume that the contract is always running the same code. In the worst case, for example, a malicious proxy admin (leaked keys, insider, …) could upgrade the contract and perform unsolicited calls with arbitrary data to Flash Loan consumers in an attempt to exploit them. It is therefore highly recommended to verify that flash loan callbacks in the system can only be called if the contract was calling out to the provider to provide a Flash Loan and that the conditions of the flash loan (returned data, amount) are correct.\\nNot all Flash Loan providers implement EIP-3156 correctly.\\nCream Finance, for example, allows users to set an arbitrary `initiator` when requesting a flash loan. This deviates from EIP-3156 and was reported to the Cream development team as a security issue. Hence, anyone can spoof that `initiator` and potentially bypass authentication checks in the consumers' `receiver.onFlashLoan()`. Depending on what the third-party application consuming the flash loan does with the funds, the impact might range from medium to critical with funds at risk. 
For example, projects might assume that the flash loan always originates from their trusted components, e.g., because they use them to refinance switching funds between pools or protocols.\\nThe `FlasherFTM` contract assumes that flash loans for the Flasher can only be initiated by authorized callers (isAuthorized) - for a reason - because it is vital that the `FlashLoan.Info calldata info` parameter only contains trusted data:\\n```\\n/\\*\\*\\n \\* @dev Routing Function for Flashloan Provider\\n \\* @param info: struct information for flashLoan\\n \\* @param \\_flashnum: integer identifier of flashloan provider\\n \\*/\\nfunction initiateFlashloan(FlashLoan.Info calldata info, uint8 \\_flashnum) external isAuthorized override {\\n if (\\_flashnum == 0) {\\n \\_initiateGeistFlashLoan(info);\\n } else if (\\_flashnum == 2) {\\n \\_initiateCreamFlashLoan(info);\\n } else {\\n revert(Errors.VL\\_INVALID\\_FLASH\\_NUMBER);\\n }\\n}\\n```\\n\\n```\\nmodifier isAuthorized() {\\n require(\\n msg.sender == \\_fujiAdmin.getController() ||\\n msg.sender == \\_fujiAdmin.getFliquidator() ||\\n msg.sender == owner(),\\n Errors.VL\\_NOT\\_AUTHORIZED\\n );\\n \\_;\\n}\\n```\\n\\nThe Cream Flash Loan initiation code requests the flash loan via ICTokenFlashloan(crToken).flashLoan(receiver=address(this), initiator=address(this), ...):\\n```\\n/\\*\\*\\n \\* @dev Initiates an CreamFinance flashloan.\\n \\* @param info: data to be passed between functions executing flashloan logic\\n \\*/\\nfunction \\_initiateCreamFlashLoan(FlashLoan.Info calldata info) internal {\\n address crToken = info.asset == \\_FTM\\n ? 
0xd528697008aC67A21818751A5e3c58C8daE54696\\n : \\_crMappings.addressMapping(info.asset);\\n\\n // Prepara data for flashloan execution\\n bytes memory params = abi.encode(info);\\n\\n // Initialize Instance of Cream crLendingContract\\n ICTokenFlashloan(crToken).flashLoan(address(this), address(this), info.amount, params);\\n}\\n```\\n\\nNote: The Cream implementation does not send `sender=msg.sender` to the `onFlashLoan()` callback - like any other flash loan provider does and EIP-3156 suggests - but uses the value that was passed in as `initiator` when requesting the callback. This detail completely undermines the authentication checks implemented in `onFlashLoan` as the `sender` value cannot be trusted.\\n```\\naddress initiator,\\n```\\n\\n```\\n \\*/\\nfunction onFlashLoan(\\n address sender,\\n address underlying,\\n uint256 amount,\\n uint256 fee,\\n bytes calldata params\\n) external override returns (bytes32) {\\n // Check Msg. Sender is crToken Lending Contract\\n // from IronBank because ETH on Cream cannot perform a flashloan\\n address crToken = underlying == \\_WFTM\\n ? 0xd528697008aC67A21818751A5e3c58C8daE54696\\n : \\_crMappings.addressMapping(underlying);\\n require(msg.sender == crToken && address(this) == sender, Errors.VL\\_NOT\\_AUTHORIZED);\\n```\\n
Cream Finance\\nWe've reached out to the Cream developer team, who have confirmed the issue. They are planning to implement countermeasures. Our recommendation can be summarized as follows:\\nImplement the EIP-3156 compliant version of flashLoan() with initiator hardcoded to `msg.sender`.\\nFujiDAO (and other flash loan consumers)\\nWe recommend not assuming that `FlashLoan.Info` contains trusted or even validated data when a third-party flash loan provider provides it! Developers should ensure that the data received was provided when the flash loan was requested.\\nThe contract should reject unsolicited flash loans. In the scenario where a flash loan provider is exploited, the risk of an exploited trust relationship is less likely to spread to the rest of the system.\\nThe Cream `initiator` provided to the `onFlashLoan()` callback cannot be trusted until the Cream developers fix this issue. The `initiator` can easily be spoofed to perform unsolicited flash loans. We, therefore, suggest:\\nValidate that the `initiator` value is the `flashLoan()` caller. This conforms to the standard and is hopefully how the Cream team is fixing this, and\\nEnsure the implementation tracks its own calls to `flashLoan()` in a state-variable semaphore, i.e. store the flash loan data/hash in a temporary state-variable that is only set just before calling `flashLoan()` until being called back in `onFlashLoan()`. The received data can then be verified against the stored artifact. This is a safe way of authenticating and verifying callbacks.\\nValues received from untrusted third parties should always be validated with the utmost scrutiny.\\nSmart contract upgrades are risky, so we recommend implementing the means to pause certain flash loan providers.\\nEnsure that flash loan handler functions should never re-enter the system. 
This provides additional security guarantees in case a flash loan provider gets breached.\\nNote: The Fuji development team implemented a hotfix to prevent unsolicited calls from Cream by storing the `hash(FlashLoan.info)` in a state variable just before requesting the flash loan. Inside the `onFlashLoan` callback, this state is validated and cleared accordingly.\\nAn improvement to this hotfix would be to check `_paramsHash` before any external calls are made and clear it right after validation at the beginning of the function. Additionally, `hash==0x0` should be explicitly disallowed. By doing so, the check also serves as a reentrancy guard and helps further reduce the risk of a potentially malicious flash loan re-entering the function.
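A condensed sketch of such a semaphore with hypothetical names (`_paramsHash` mirrors the hotfix; validation happens before any external call, and the cleared slot doubles as a reentrancy guard):\\n```\\nbytes32 private \\_paramsHash; // non-zero only while a flashloan we requested is in flight\\n\\nfunction \\_requestFlashLoan(address crToken, FlashLoan.Info calldata info) internal {\\n bytes memory params = abi.encode(info);\\n \\_paramsHash = keccak256(params);\\n ICTokenFlashloan(crToken).flashLoan(address(this), address(this), info.amount, params);\\n}\\n\\nfunction onFlashLoan(\\n address sender,\\n address underlying,\\n uint256 amount,\\n uint256 fee,\\n bytes calldata params\\n) external returns (bytes32) {\\n // reject unsolicited calls first; do not rely on the spoofable `sender`\\n require(\\_paramsHash != bytes32(0) && \\_paramsHash == keccak256(params), Errors.VL\\_NOT\\_AUTHORIZED);\\n \\_paramsHash = bytes32(0); // clear immediately: acts as a reentrancy guard\\n // ... execute flashloan logic and repay ...\\n return keccak256("ERC3156FlashBorrower.onFlashLoan");\\n}\\n```\\n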
null
```\\n/\\*\\*\\n \\* @dev Routing Function for Flashloan Provider\\n \\* @param info: struct information for flashLoan\\n \\* @param \\_flashnum: integer identifier of flashloan provider\\n \\*/\\nfunction initiateFlashloan(FlashLoan.Info calldata info, uint8 \\_flashnum) external isAuthorized override {\\n if (\\_flashnum == 0) {\\n \\_initiateGeistFlashLoan(info);\\n } else if (\\_flashnum == 2) {\\n \\_initiateCreamFlashLoan(info);\\n } else {\\n revert(Errors.VL\\_INVALID\\_FLASH\\_NUMBER);\\n }\\n}\\n```\\n
Lack of reentrancy protection in token interactions
high
Token operations may potentially re-enter the system. For example, `univTransfer` may perform a low-level `to.call{value}()` and, depending on the token's specification (e.g. `ERC-20` extension or `ERC-20` compliant ERC-777), `token` may implement callbacks when being called as `token.safeTransfer(to, amount)` (or token.transfer*()).\\nTherefore, it is crucial to strictly adhere to the checks-effects pattern and safeguard affected methods using a mutex.\\n```\\nfunction univTransfer(\\n IERC20 token,\\n address payable to,\\n uint256 amount\\n) internal {\\n if (amount > 0) {\\n if (isFTM(token)) {\\n (bool sent, ) = to.call{ value: amount }("");\\n require(sent, "Failed to send Ether");\\n } else {\\n token.safeTransfer(to, amount);\\n }\\n }\\n}\\n```\\n\\n`withdraw` is `nonReentrant` while `paybackAndWithdraw` is not, which appears to be inconsistent\\n```\\n/\\*\\*\\n \\* @dev Paybacks the underlying asset and withdraws collateral in a single function call from activeProvider\\n \\* @param \\_paybackAmount: amount of underlying asset to be payback, pass -1 to pay full amount\\n \\* @param \\_collateralAmount: amount of collateral to be withdrawn, pass -1 to withdraw maximum amount\\n \\*/\\nfunction paybackAndWithdraw(int256 \\_paybackAmount, int256 \\_collateralAmount) external payable {\\n updateF1155Balances();\\n \\_internalPayback(\\_paybackAmount);\\n \\_internalWithdraw(\\_collateralAmount);\\n}\\n```\\n\\n```\\n/\\*\\*\\n \\* @dev Paybacks Vault's type underlying to activeProvider - called by users\\n \\* @param \\_repayAmount: token amount of underlying to repay, or\\n \\* pass any 'negative number' to repay full ammount\\n \\* Emits a {Repay} event.\\n \\*/\\nfunction payback(int256 \\_repayAmount) public payable override {\\n updateF1155Balances();\\n \\_internalPayback(\\_repayAmount);\\n}\\n```\\n\\n`depositAndBorrow` is not `nonReentrant` while `borrow()` is which appears to be inconsistent\\n```\\n/\\*\\*\\n \\* @dev Deposits collateral and 
borrows underlying in a single function call from activeProvider\\n \\* @param \\_collateralAmount: amount to be deposited\\n \\* @param \\_borrowAmount: amount to be borrowed\\n \\*/\\nfunction depositAndBorrow(uint256 \\_collateralAmount, uint256 \\_borrowAmount) external payable {\\n updateF1155Balances();\\n \\_internalDeposit(\\_collateralAmount);\\n \\_internalBorrow(\\_borrowAmount);\\n}\\n```\\n\\n```\\n/\\*\\*\\n \\* @dev Borrows Vault's type underlying amount from activeProvider\\n \\* @param \\_borrowAmount: token amount of underlying to borrow\\n \\* Emits a {Borrow} event.\\n \\*/\\nfunction borrow(uint256 \\_borrowAmount) public override nonReentrant {\\n updateF1155Balances();\\n \\_internalBorrow(\\_borrowAmount);\\n}\\n```\\n\\nHere's an example call stack for `depositAndBorrow` that outlines how a reentrant `ERC20` token (e.g. ERC777) may call back into `depositAndBorrow` again, `updateBalances` twice in the beginning before tokens are even transferred and then continues to call `internalDeposit`, `internalBorrow`, `internalBorrow` without an update before the 2nd borrow. Note that both `internalDeposit` and `internalBorrow` read indexes that may now be outdated.\\n```\\ndepositAndBorrow\\n updateBalances\\n internalDeposit ->\\n ERC777(collateralAsset).safeTransferFrom() ---> calls back!\\n ---callback:beforeTokenTransfer---->\\n !! depositAndBorrow\\n updateBalances\\n internalDeposit\\n --> ERC777.safeTransferFrom()\\n <--\\n \\_deposit\\n mint\\n internalBorrow\\n mint\\n \\_borrow\\n ERC777(borrowAsset).univTransfer(msg.sender) --> might call back\\n\\n <-------------------------------\\n \\_deposit\\n mint\\n internalBorrow\\n mint\\n \\_borrow \\n --> ERC777(borrowAsset).univTransfer(msg.sender) --> might call back\\n <--\\n```\\n
Consider decorating methods that may call back to untrusted sources (i.e., native token transfers, callback token operations) as `nonReentrant` and strictly follow the checks-effects pattern for all contracts in the code-base.
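For instance, the batched entry point could be guarded consistently with `borrow()` (a sketch; `nonReentrant` is the same modifier already used by the vault):\\n```\\nfunction depositAndBorrow(uint256 \\_collateralAmount, uint256 \\_borrowAmount) external payable nonReentrant {\\n updateF1155Balances(); // effects before interactions\\n \\_internalDeposit(\\_collateralAmount);\\n \\_internalBorrow(\\_borrowAmount); // performs the external token transfer last\\n}\\n```\\n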
null
```\\nfunction univTransfer(\\n IERC20 token,\\n address payable to,\\n uint256 amount\\n) internal {\\n if (amount > 0) {\\n if (isFTM(token)) {\\n (bool sent, ) = to.call{ value: amount }("");\\n require(sent, "Failed to send Ether");\\n } else {\\n token.safeTransfer(to, amount);\\n }\\n }\\n}\\n```\\n
Unchecked Return Values - ICErc20 repayBorrow
high
`ICErc20.repayBorrow` returns a non-zero uint on error. Multiple providers do not check for this error condition and might return `success` even though `repayBorrow` failed, returning an error code.\\nThis can potentially allow a malicious user to call `paybackAndWithdraw()` while not repaying by causing an error in the sub-call to `Compound.repayBorrow()`, which ends up being silently ignored. Due to the missing success condition check, execution continues normally with `_internalWithdraw()`.\\nAlso, see issue 4.5.\\n```\\nfunction repayBorrow(uint256 repayAmount) external returns (uint256);\\n```\\n\\nThe method may return an error due to multiple reasons:\\n```\\nfunction repayBorrowInternal(uint repayAmount) internal nonReentrant returns (uint, uint) {\\n uint error = accrueInterest();\\n if (error != uint(Error.NO\\_ERROR)) {\\n // accrueInterest emits logs on errors, but we still want to log the fact that an attempted borrow failed\\n return (fail(Error(error), FailureInfo.REPAY\\_BORROW\\_ACCRUE\\_INTEREST\\_FAILED), 0);\\n }\\n // repayBorrowFresh emits repay-borrow-specific logs on errors, so we don't need to\\n return repayBorrowFresh(msg.sender, msg.sender, repayAmount);\\n}\\n```\\n\\n```\\nif (allowed != 0) {\\n return (failOpaque(Error.COMPTROLLER\\_REJECTION, FailureInfo.REPAY\\_BORROW\\_COMPTROLLER\\_REJECTION, allowed), 0);\\n}\\n\\n/\\* Verify market's block number equals current block number \\*/\\nif (accrualBlockNumber != getBlockNumber()) {\\n return (fail(Error.MARKET\\_NOT\\_FRESH, FailureInfo.REPAY\\_BORROW\\_FRESHNESS\\_CHECK), 0);\\n}\\n\\nRepayBorrowLocalVars memory vars;\\n\\n/\\* We remember the original borrowerIndex for verification purposes \\*/\\nvars.borrowerIndex = accountBorrows[borrower].interestIndex;\\n\\n/\\* We fetch the amount the borrower owes, with accumulated interest \\*/\\n(vars.mathErr, vars.accountBorrows) = borrowBalanceStoredInternal(borrower);\\nif (vars.mathErr != MathError.NO\\_ERROR) {\\n return 
(failOpaque(Error.MATH\\_ERROR, FailureInfo.REPAY\\_BORROW\\_ACCUMULATED\\_BALANCE\\_CALCULATION\\_FAILED, uint(vars.mathErr)), 0);\\n}\\n```\\n\\nMultiple providers, here are some examples:\\n```\\n // Check there is enough balance to pay\\n require(erc20token.balanceOf(address(this)) >= \\_amount, "Not-enough-token");\\n erc20token.univApprove(address(cyTokenAddr), \\_amount);\\n cyToken.repayBorrow(\\_amount);\\n}\\n```\\n\\n```\\nrequire(erc20token.balanceOf(address(this)) >= \\_amount, "Not-enough-token");\\nerc20token.univApprove(address(cyTokenAddr), \\_amount);\\ncyToken.repayBorrow(\\_amount);\\n```\\n\\n```\\nif (\\_isETH(\\_asset)) {\\n // Create a reference to the corresponding cToken contract\\n ICEth cToken = ICEth(cTokenAddr);\\n\\n cToken.repayBorrow{ value: msg.value }();\\n} else {\\n // Create reference to the ERC20 contract\\n IERC20 erc20token = IERC20(\\_asset);\\n\\n // Create a reference to the corresponding cToken contract\\n ICErc20 cToken = ICErc20(cTokenAddr);\\n\\n // Check there is enough balance to pay\\n require(erc20token.balanceOf(address(this)) >= \\_amount, "Not-enough-token");\\n erc20token.univApprove(address(cTokenAddr), \\_amount);\\n cToken.repayBorrow(\\_amount);\\n}\\n```\\n
Require that `cyToken.repayBorrow(_amount)` returns `0` (`Error.NO_ERROR`) and revert otherwise.
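A minimal sketch of the check (the revert message is illustrative):\\n```\\nerc20token.univApprove(address(cyTokenAddr), \\_amount);\\nuint256 status = cyToken.repayBorrow(\\_amount);\\nrequire(status == 0, "repayBorrow-failed"); // 0 == Error.NO\\_ERROR\\n```\\n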
null
```\\nfunction repayBorrow(uint256 repayAmount) external returns (uint256);\\n```\\n
Unchecked Return Values - IComptroller exitMarket, enterMarkets
high
`IComptroller.exitMarket()`, `IComptroller.enterMarkets()` may return a non-zero uint on error but none of the Providers check for this error condition. Together with issue 4.10, this might suggest that unchecked return values may be a systemic problem.\\nHere's the upstream implementation:\\n```\\nif (amountOwed != 0) {\\n return fail(Error.NONZERO\\_BORROW\\_BALANCE, FailureInfo.EXIT\\_MARKET\\_BALANCE\\_OWED);\\n}\\n\\n/\\* Fail if the sender is not permitted to redeem all of their tokens \\*/\\nuint allowed = redeemAllowedInternal(cTokenAddress, msg.sender, tokensHeld);\\nif (allowed != 0) {\\n return failOpaque(Error.REJECTION, FailureInfo.EXIT\\_MARKET\\_REJECTION, allowed);\\n}\\n```\\n\\n```\\n /\\*\\*\\n \\* @notice Removes asset from sender's account liquidity calculation\\n \\* @dev Sender must not have an outstanding borrow balance in the asset,\\n \\* or be providing necessary collateral for an outstanding borrow.\\n \\* @param cTokenAddress The address of the asset to be removed\\n \\* @return Whether or not the account successfully exited the market\\n \\*/\\n function exitMarket(address cTokenAddress) external returns (uint) {\\n CToken cToken = CToken(cTokenAddress);\\n /\\* Get sender tokensHeld and amountOwed underlying from the cToken \\*/\\n (uint oErr, uint tokensHeld, uint amountOwed, ) = cToken.getAccountSnapshot(msg.sender);\\n require(oErr == 0, "exitMarket: getAccountSnapshot failed"); // semi-opaque error code\\n\\n /\\* Fail if the sender has a borrow balance \\*/\\n if (amountOwed != 0) {\\n return fail(Error.NONZERO\\_BORROW\\_BALANCE, FailureInfo.EXIT\\_MARKET\\_BALANCE\\_OWED);\\n }\\n\\n /\\* Fail if the sender is not permitted to redeem all of their tokens \\*/\\n uint allowed = redeemAllowedInternal(cTokenAddress, msg.sender, tokensHeld);\\n if (allowed != 0) {\\n return failOpaque(Error.REJECTION, FailureInfo.EXIT\\_MARKET\\_REJECTION, allowed);\\n }\\n```\\n\\nUnchecked return value `exitMarket`\\nAll Providers exhibit the same 
issue, probably due to code reuse. (also see https://github.com/ConsenSysDiligence/fuji-protocol-audit-2022-02/issues/19). Some examples:\\n```\\nfunction \\_exitCollatMarket(address \\_cyTokenAddress) internal {\\n // Create a reference to the corresponding network Comptroller\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n\\n comptroller.exitMarket(\\_cyTokenAddress);\\n}\\n```\\n\\n```\\nfunction \\_exitCollatMarket(address \\_cyTokenAddress) internal {\\n // Create a reference to the corresponding network Comptroller\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n\\n comptroller.exitMarket(\\_cyTokenAddress);\\n}\\n```\\n\\n```\\nfunction \\_exitCollatMarket(address \\_cTokenAddress) internal {\\n // Create a reference to the corresponding network Comptroller\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n\\n comptroller.exitMarket(\\_cTokenAddress);\\n}\\n```\\n\\n```\\nfunction \\_exitCollatMarket(address \\_cyTokenAddress) internal {\\n // Create a reference to the corresponding network Comptroller\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n\\n comptroller.exitMarket(\\_cyTokenAddress);\\n}\\n```\\n\\nUnchecked return value `enterMarkets` (Note that `IComptroller` returns `NO_ERROR` when already joined to `enterMarkets`.\\nAll Providers exhibit the same issue, probably due to code reuse. (also see https://github.com/ConsenSysDiligence/fuji-protocol-audit-2022-02/issues/19). For example:\\n```\\nfunction \\_enterCollatMarket(address \\_cyTokenAddress) internal {\\n // Create a reference to the corresponding network Comptroller\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n\\n address[] memory cyTokenMarkets = new address[](1);\\n cyTokenMarkets[0] = \\_cyTokenAddress;\\n comptroller.enterMarkets(cyTokenMarkets);\\n}\\n```\\n
Require that the return value is `Error.NO_ERROR` (0) and revert otherwise.
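A sketch of the checked variants (revert messages are illustrative):\\n```\\nfunction \\_enterCollatMarket(address \\_cyTokenAddress) internal {\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n address[] memory cyTokenMarkets = new address[](1);\\n cyTokenMarkets[0] = \\_cyTokenAddress;\\n uint256[] memory results = comptroller.enterMarkets(cyTokenMarkets);\\n require(results[0] == 0, "enter-market-failed"); // 0 == Error.NO\\_ERROR\\n}\\n\\nfunction \\_exitCollatMarket(address \\_cyTokenAddress) internal {\\n IComptroller comptroller = IComptroller(\\_getComptrollerAddress());\\n require(comptroller.exitMarket(\\_cyTokenAddress) == 0, "exit-market-failed");\\n}\\n```\\n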
null
```\\nif (amountOwed != 0) {\\n return fail(Error.NONZERO\\_BORROW\\_BALANCE, FailureInfo.EXIT\\_MARKET\\_BALANCE\\_OWED);\\n}\\n\\n/\\* Fail if the sender is not permitted to redeem all of their tokens \\*/\\nuint allowed = redeemAllowedInternal(cTokenAddress, msg.sender, tokensHeld);\\nif (allowed != 0) {\\n return failOpaque(Error.REJECTION, FailureInfo.EXIT\\_MARKET\\_REJECTION, allowed);\\n}\\n```\\n
Fliquidator - excess funds of native tokens are not returned
medium
`FliquidatorFTM.batchLiquidate` accepts the `FTM` native token and checks if at least an amount of `debtTotal` was provided with the call. The function continues using the `debtTotal` value. If a caller provides msg.value > `debtTotal`, excess funds are not returned and remain in the contract. `FliquidatorFTM` is not upgradeable, and there is no way to recover the surplus funds.\\n```\\nif (vAssets.borrowAsset == FTM) {\\n require(msg.value >= debtTotal, Errors.VL\\_AMOUNT\\_ERROR);\\n} else {\\n```\\n
Consider returning excess funds. Consider making `_constructParams` public to allow the caller to pre-calculate the `debtTotal` that needs to be provided with the call.\\nConsider removing support for native token `FTM` entirely to reduce the overall code complexity. The wrapped equivalent can be used instead.
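A sketch of the refund, keeping the existing `debtTotal` accounting untouched:\\n```\\nif (vAssets.borrowAsset == FTM) {\\n require(msg.value >= debtTotal, Errors.VL\\_AMOUNT\\_ERROR);\\n uint256 surplus = msg.value - debtTotal;\\n if (surplus > 0) {\\n (bool sent, ) = payable(msg.sender).call{ value: surplus }("");\\n require(sent, "Failed to refund surplus");\\n }\\n}\\n```\\n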
null
```\\nif (vAssets.borrowAsset == FTM) {\\n require(msg.value >= debtTotal, Errors.VL\\_AMOUNT\\_ERROR);\\n} else {\\n```\\n
Unsafe arithmetic casts
medium
The reason for using signed integers in some situations appears to be to use negative values as an indicator to withdraw everything. Using a whole bit of uint256 for this is quite a lot when using `type(uint256).max` would equal or better serve as a flag to withdraw everything.\\nFurthermore, even though the code uses `solidity 0.8.x`, which safeguards arithmetic operations against under/overflows, arithmetic typecast is not protected.\\nAlso, see issue 4.9 for a related issue.\\n```\\n⇒ solidity-shell\\n\\n🚀 Entering interactive Solidity ^0.8.11 shell. '.help' and '.exit' are your friends.\\n » ℹ️ ganache-mgr: starting temp. ganache instance // rest of code\\n » uint(int(-100))\\n115792089237316195423570985008687907853269984665640564039457584007913129639836\\n » int256(uint(2\\*\\*256-100))\\n-100\\n```\\n\\n```\\n// Compute how much collateral needs to be swapt\\nuint256 collateralInPlay = \\_getCollateralInPlay(\\n vAssets.collateralAsset,\\n vAssets.borrowAsset,\\n debtTotal + bonus\\n);\\n\\n// Burn f1155\\n\\_burnMulti(addrs, borrowBals, vAssets, \\_vault, f1155);\\n\\n// Withdraw collateral\\nIVault(\\_vault).withdrawLiq(int256(collateralInPlay));\\n```\\n\\n```\\n// Compute how much collateral needs to be swapt for all liquidated users\\nuint256 collateralInPlay = \\_getCollateralInPlay(\\n vAssets.collateralAsset,\\n vAssets.borrowAsset,\\n \\_amount + \\_flashloanFee + bonus\\n);\\n\\n// Burn f1155\\n\\_burnMulti(\\_addrs, \\_borrowBals, vAssets, \\_vault, f1155);\\n\\n// Withdraw collateral\\nIVault(\\_vault).withdrawLiq(int256(collateralInPlay));\\n```\\n\\n```\\nuint256 amount = \\_amount < 0 ? 
debtTotal : uint256(\\_amount);\\n```\\n\\n```\\nfunction withdrawLiq(int256 \\_withdrawAmount) external override nonReentrant onlyFliquidator {\\n // Logic used when called by Fliquidator\\n \\_withdraw(uint256(\\_withdrawAmount), address(activeProvider));\\n IERC20Upgradeable(vAssets.collateralAsset).univTransfer(\\n payable(msg.sender),\\n uint256(\\_withdrawAmount)\\n );\\n}\\n```\\n\\npot. unsafe truncation (unlikely)\\n```\\nfunction updateState(uint256 \\_assetID, uint256 newBalance) external override onlyPermit {\\n uint256 total = totalSupply(\\_assetID);\\n if (newBalance > 0 && total > 0 && newBalance > total) {\\n uint256 newIndex = (indexes[\\_assetID] \\* newBalance) / total;\\n indexes[\\_assetID] = uint128(newIndex);\\n }\\n}\\n```\\n
If negative values are only used as a flag to indicate that all funds should be used for an operation, use `type(uint256).max` instead. Using the maximum value as a flag wastes less of the value space than reserving the entire uint256 high-bit range. Avoid typecasts where possible. Use `SafeCast` instead or verify that the casts are safe because the values they operate on cannot under- or overflow. Add inline code comments if that's the case.
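Two illustrative fragments, assuming OpenZeppelin's `SafeCast` is available:\\n```\\n// flag value instead of the sign bit\\nuint256 amount = \\_amount == type(uint256).max ? debtTotal : \\_amount;\\n\\n// reverts on truncation instead of silently wrapping\\nindexes[\\_assetID] = SafeCast.toUint128(newIndex);\\n```\\n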
null
```\\n⇒ solidity-shell\\n\\n🚀 Entering interactive Solidity ^0.8.11 shell. '.help' and '.exit' are your friends.\\n » ℹ️ ganache-mgr: starting temp. ganache instance // rest of code\\n » uint(int(-100))\\n115792089237316195423570985008687907853269984665640564039457584007913129639836\\n » int256(uint(2\\*\\*256-100))\\n-100\\n```\\n
Missing input validation on flash close fee factors
medium
The `FliquidatorFTM` contract allows authorized parties to set the flash close fee factor. The factor is provided as two integers denoting numerator and denominator. Due to a lack of boundary checks, it is possible to set unrealistically high factors, which go well above 1. This can have unexpected effects on internal accounting and on the repayment of flash loan balances.\\n```\\nfunction setFlashCloseFee(uint64 \\_newFactorA, uint64 \\_newFactorB) external isAuthorized {\\n flashCloseF.a = \\_newFactorA;\\n flashCloseF.b = \\_newFactorB;\\n```\\n
Add a requirement making sure that `flashCloseF.a <= flashCloseF.b`.
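For example (the error code name is illustrative):\\n```\\nfunction setFlashCloseFee(uint64 \\_newFactorA, uint64 \\_newFactorB) external isAuthorized {\\n require(\\_newFactorB > 0 && \\_newFactorA <= \\_newFactorB, Errors.VL\\_INPUT\\_ERROR);\\n flashCloseF.a = \\_newFactorA;\\n flashCloseF.b = \\_newFactorB;\\n}\\n```\\n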
null
```\\nfunction setFlashCloseFee(uint64 \\_newFactorA, uint64 \\_newFactorB) external isAuthorized {\\n flashCloseF.a = \\_newFactorA;\\n flashCloseF.b = \\_newFactorB;\\n```\\n
Separation of concerns and consistency in vaults
medium
The `FujiVaultFTM` contract contains multiple balance-changing functions. Most notably, `withdraw` is passed an amount parameter of type `int256`. Negative values of this parameter are given to the `_internalWithdraw` function, where they trigger the withdrawal of all collateral. This approach can result in accounting mistakes in the future, since beyond a certain point in the vault's accounting amounts are expected to be positive only. Furthermore, the concerns of a partial withdrawal and a full withdrawal are not separated.\\nThe above issue applies analogously to the `payback` function and its dependency on `_internalPayback`.\\nFor consistency, `withdrawLiq` also takes an `int256` amount parameter. This function is only accessible to the `Fliquidator` contract and withdraws collateral from the active provider. However, all occurrences of the `_withdrawAmount` parameter are cast to `uint256`.\\nThe `withdraw` entry point:\\n```\\nfunction withdraw(int256 \\_withdrawAmount) public override nonReentrant {\\n updateF1155Balances();\\n \\_internalWithdraw(\\_withdrawAmount);\\n}\\n```\\n\\n_internalWithdraw's negative amount check:\\n```\\nuint256 amountToWithdraw = \\_withdrawAmount < 0\\n ? providedCollateral - neededCollateral\\n : uint256(\\_withdrawAmount);\\n```\\n\\nThe `withdrawLiq` entry point for the Fliquidator:\\n```\\nfunction withdrawLiq(int256 \\_withdrawAmount) external override nonReentrant onlyFliquidator {\\n // Logic used when called by Fliquidator\\n \\_withdraw(uint256(\\_withdrawAmount), address(activeProvider));\\n IERC20Upgradeable(vAssets.collateralAsset).univTransfer(\\n payable(msg.sender),\\n uint256(\\_withdrawAmount)\\n );\\n}\\n```\\n
We recommend splitting the `withdraw(int256)` function into two: `withdraw(uint256)` and `withdrawAll()`. These will provide the same functionality while rendering the updated code of `_internalWithdraw` easier to read, maintain, and harder to manipulate. The recommendation applies to `payback` and `_internalPayback`.\\nSimilarly, withdrawLiq's parameter should be a `uint256` to prevent unnecessary casts.
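A sketch of the split entry points (`_internalWithdrawAll` is a hypothetical helper that computes `providedCollateral - neededCollateral` internally):\\n```\\nfunction withdraw(uint256 \\_withdrawAmount) public nonReentrant {\\n updateF1155Balances();\\n \\_internalWithdraw(\\_withdrawAmount);\\n}\\n\\nfunction withdrawAll() external nonReentrant {\\n updateF1155Balances();\\n \\_internalWithdrawAll(); // hypothetical: withdraws the maximum free collateral\\n}\\n```\\n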
null
```\\nfunction withdraw(int256 \\_withdrawAmount) public override nonReentrant {\\n updateF1155Balances();\\n \\_internalWithdraw(\\_withdrawAmount);\\n}\\n```\\n
Aave/Geist Interface declaration mismatch and unchecked return values
medium
The two lending providers, Geist & Aave, do not seem to be directly affiliated even though one is a fork of the other. However, the interfaces may likely diverge in the future. Using the same interface declaration for both protocols might become problematic with future upgrades to either protocol. The interface declaration does not seem to come from the original upstream project. The interface `IAaveLendingPool` does not declare any return values while some of the functions called in Geist or Aave return them.\\nNote: that we have not verified all interfaces for correctness. However, we urge the client to only use official interface declarations from the upstream projects and verify that all other interfaces match.\\nThe `ILendingPool` configured in `ProviderAave` (0xB53C1a33016B2DC2fF3653530bfF1848a515c8c5 -> implementation: 0xc6845a5c768bf8d7681249f8927877efda425baf)\\n```\\nfunction \\_getAaveProvider() internal pure returns (IAaveLendingPoolProvider) {\\n return IAaveLendingPoolProvider(0xB53C1a33016B2DC2fF3653530bfF1848a515c8c5);\\n}\\n```\\n\\nThe `IAaveLendingPool` does not declare return values for any function, while upstream does.\\n```\\n// SPDX-License-Identifier: MIT\\n\\npragma solidity ^0.8.0;\\n\\ninterface IAaveLendingPool {\\n function flashLoan(\\n address receiverAddress,\\n address[] calldata assets,\\n uint256[] calldata amounts,\\n uint256[] calldata modes,\\n address onBehalfOf,\\n bytes calldata params,\\n uint16 referralCode\\n ) external;\\n\\n function deposit(\\n address \\_asset,\\n uint256 \\_amount,\\n address \\_onBehalfOf,\\n uint16 \\_referralCode\\n ) external;\\n\\n function withdraw(\\n address \\_asset,\\n uint256 \\_amount,\\n address \\_to\\n ) external;\\n\\n function borrow(\\n address \\_asset,\\n uint256 \\_amount,\\n uint256 \\_interestRateMode,\\n uint16 \\_referralCode,\\n address \\_onBehalfOf\\n ) external;\\n\\n function repay(\\n address \\_asset,\\n uint256 \\_amount,\\n uint256 \\_rateMode,\\n address 
\\_onBehalfOf\\n ) external;\\n\\n function setUserUseReserveAsCollateral(address \\_asset, bool \\_useAsCollateral) external;\\n}\\n```\\n\\nMethods: `withdraw()`, `repay()` return `uint256` in the original implementation for Aave, see:\\nhttps://etherscan.io/address/0xc6845a5c768bf8d7681249f8927877efda425baf#code\\nThe `ILendingPool` configured for Geist:\\nMethods `withdraw()`, `repay()` return `uint256` in the original implementation for Geist, see:\\nhttps://ftmscan.com/address/0x3104ad2aadb6fe9df166948a5e3a547004862f90#code\\nNote: that the actual `amount` withdrawn does not necessarily need to match the `amount` provided with the function argument. Here's an excerpt of the upstream LendingProvider.withdraw():\\n```\\n// rest of code\\n if (amount == type(uint256).max) {\\n amountToWithdraw = userBalance;\\n }\\n// rest of code\\n return amountToWithdraw;\\n```\\n\\nAnd here's the code in Fuji that calls that method. This will break the `withdrawAll` functionality of `LendingProvider` if token `isFTM`.\\n```\\nfunction withdraw(address \\_asset, uint256 \\_amount) external payable override {\\n IAaveLendingPool aave = IAaveLendingPool(\\_getAaveProvider().getLendingPool());\\n\\n bool isFtm = \\_asset == \\_getFtmAddr();\\n address \\_tokenAddr = isFtm ? \\_getWftmAddr() : \\_asset;\\n\\n aave.withdraw(\\_tokenAddr, \\_amount, address(this));\\n\\n // convert WFTM to FTM\\n if (isFtm) {\\n address unwrapper = \\_getUnwrapper();\\n IERC20(\\_tokenAddr).univTransfer(payable(unwrapper), \\_amount);\\n IUnwrapper(unwrapper).withdraw(\\_amount);\\n }\\n}\\n```\\n\\nSimilar for `repay()`, which returns the actual amount repaid.
Always use the original interface unless only a minimal subset of functions is used.\\nUse the original upstream interfaces of the corresponding project (link via the respective npm packages if available).\\nAvoid omitting parts of the function declaration! Especially when it comes to return values.\\nCheck return values. Use the value returned from `withdraw()` AND `repay()`
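A sketch of how the returned value could be consumed in the provider, assuming the upstream declaration `function withdraw(address asset, uint256 amount, address to) external returns (uint256);` (all helper names mirror the existing provider code):\\n```\\nfunction withdraw(address \\_asset, uint256 \\_amount) external payable override {\\n IAaveLendingPool aave = IAaveLendingPool(\\_getAaveProvider().getLendingPool());\\n\\n bool isFtm = \\_asset == \\_getFtmAddr();\\n address \\_tokenAddr = isFtm ? \\_getWftmAddr() : \\_asset;\\n\\n // use the amount actually withdrawn, not the requested amount\\n uint256 withdrawn = aave.withdraw(\\_tokenAddr, \\_amount, address(this));\\n\\n // convert WFTM to FTM\\n if (isFtm) {\\n address unwrapper = \\_getUnwrapper();\\n IERC20(\\_tokenAddr).univTransfer(payable(unwrapper), withdrawn);\\n IUnwrapper(unwrapper).withdraw(withdrawn);\\n }\\n}\\n```\\n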
null
```\\nfunction \\_getAaveProvider() internal pure returns (IAaveLendingPoolProvider) {\\n return IAaveLendingPoolProvider(0xB53C1a33016B2DC2fF3653530bfF1848a515c8c5);\\n}\\n```\\n
Missing slippage protection for rewards swap
medium
In `FujiVaultFTM.harvestRewards` a swap transaction is generated using a call to `SwapperFTM.getSwapTransaction`. In all relevant scenarios, this call uses a minimum output amount of zero, which de facto deactivates slippage checks. Most of the value from harvested rewards can thus be siphoned off by sandwiching such calls.\\n`amountOutMin` is `0`, effectively disabling slippage control in the swap method.\\n```\\ntransaction.data = abi.encodeWithSelector(\\n IUniswapV2Router01.swapExactETHForTokens.selector,\\n 0,\\n path,\\n msg.sender,\\n type(uint256).max\\n);\\n```\\n\\nOnly success is checked:\\n```\\n// Swap rewards -> collateralAsset\\n(success, ) = swapTransaction.to.call{ value: swapTransaction.value }(swapTransaction.data);\\nrequire(success, "failed to swap rewards");\\n```\\n
Use a slippage check such as for liquidator swaps:\\n```\\nrequire(\\n (priceDelta \\* SLIPPAGE\\_LIMIT\\_DENOMINATOR) / priceFromOracle < SLIPPAGE\\_LIMIT\\_NUMERATOR,\\n Errors.VL\\_SWAP\\_SLIPPAGE\\_LIMIT\\_EXCEED\\n);\\n```\\n\\nOr specify a non-zero `amountOutMin` argument in calls to `IUniswapV2Router01.swapExactETHForTokens`.
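One possible sketch of a non-zero minimum, assuming an oracle-derived quote `expectedOut`; the constant names here are illustrative assumptions, not existing code:\\n```\\n// e.g. SLIPPAGE\\_NUMERATOR = 97, SLIPPAGE\\_DENOMINATOR = 100 => accept at most 3% slippage\\nuint256 amountOutMin = (expectedOut \\* SLIPPAGE\\_NUMERATOR) / SLIPPAGE\\_DENOMINATOR;\\ntransaction.data = abi.encodeWithSelector(\\n IUniswapV2Router01.swapExactETHForTokens.selector,\\n amountOutMin, // instead of 0\\n path,\\n msg.sender,\\n block.timestamp\\n);\\n```\\n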
null
```\\ntransaction.data = abi.encodeWithSelector(\\n IUniswapV2Router01.swapExactETHForTokens.selector,\\n 0,\\n path,\\n msg.sender,\\n type(uint256).max\\n);\\n```\\n
FujiOracle - _getUSDPrice does not detect stale oracle prices; General Oracle Risks
medium
The external Chainlink oracle, which provides index price information to the system, introduces risk inherent to any dependency on third-party data sources. For example, the oracle could fall behind or otherwise fail to be maintained, resulting in outdated data being fed to the index price calculations. Oracle reliance has historically resulted in crippled on-chain systems, and complications that lead to these outcomes can arise from things as simple as network congestion.\\nThis risk is more pronounced for lesser-known tokens, whose Chainlink price feeds are updated less frequently.\\nEnsuring that unexpected oracle return values are correctly handled will reduce reliance on off-chain components and increase the resiliency of the smart contract system that depends on them.\\nThe codebase, as is, relies on `chainLinkOracle.latestRoundData()` and does not check the `updatedAt` timestamp or `answeredInRound` of the returned price.\\nHere's how the oracle is consumed, skipping any fields that would allow checking for stale data:\\n```\\n/\\*\\*\\n \\* @dev Calculates the USD price of asset.\\n \\* @param \\_asset: the asset address.\\n \\* Returns the USD price of the given asset\\n \\*/\\nfunction \\_getUSDPrice(address \\_asset) internal view returns (uint256 price) {\\n require(usdPriceFeeds[\\_asset] != address(0), Errors.ORACLE\\_NONE\\_PRICE\\_FEED);\\n\\n (, int256 latestPrice, , , ) = AggregatorV3Interface(usdPriceFeeds[\\_asset]).latestRoundData();\\n\\n price = uint256(latestPrice);\\n}\\n```\\n\\nHere's the implementation of the v0.6 FluxAggregator Chainlink feed with a note that timestamps should be checked.\\n```\\n\\* @return updatedAt is the timestamp when the round last was updated (i.e.\\n\\* answer was last computed)\\n```\\n
Perform sanity checks on the price returned by the oracle. If the price is older, not within configured limits, revert or handle in other means.\\nThe oracle does not provide any means to remove a potentially broken price-feed (e.g., by updating its address to `address(0)` or by pausing specific feeds or the complete oracle). The only way to pause an oracle right now is to deploy a new oracle contract. Therefore, consider adding minimally invasive functionality to pause the price-feeds if the oracle becomes unreliable.\\nMonitor the oracle data off-chain and intervene if it becomes unreliable.\\nOn-chain, realistically, both `answeredInRound` and `updatedAt` must be checked within acceptable bounds.\\n`answeredInRound == latestRound` - in this case, data may be assumed to be fresh while it might not be because the feed was entirely abandoned by nodes (no one starting a new round). Also, there's a good chance that many feeds won't always be super up-to-date (it might be acceptable to allow a threshold). A strict check might lead to transactions failing (race; e.g., round just timed out).\\n`roundId + threshold >= answeredInRound` - would allow a deviation of threshold rounds. This check alone might still result in stale data to be used if there are no more rounds. Therefore, this should be combined with `updatedAt + threshold >= block.timestamp`.
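A sketch of the checks combined, with illustrative thresholds (the `MAX\\_PRICE\\_AGE` constant is an assumption):\\n```\\nfunction \\_getUSDPrice(address \\_asset) internal view returns (uint256 price) {\\n require(usdPriceFeeds[\\_asset] != address(0), Errors.ORACLE\\_NONE\\_PRICE\\_FEED);\\n\\n (uint80 roundId, int256 latestPrice, , uint256 updatedAt, uint80 answeredInRound) =\\n AggregatorV3Interface(usdPriceFeeds[\\_asset]).latestRoundData();\\n\\n require(latestPrice > 0, "invalid price");\\n require(answeredInRound >= roundId, "stale round"); // optionally relax by a round threshold\\n require(updatedAt + MAX\\_PRICE\\_AGE >= block.timestamp, "stale price"); // e.g. MAX\\_PRICE\\_AGE = 1 hours\\n\\n price = uint256(latestPrice);\\n}\\n```\\n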
null
```\\n/\\*\\*\\n \\* @dev Calculates the USD price of asset.\\n \\* @param \\_asset: the asset address.\\n \\* Returns the USD price of the given asset\\n \\*/\\nfunction \\_getUSDPrice(address \\_asset) internal view returns (uint256 price) {\\n require(usdPriceFeeds[\\_asset] != address(0), Errors.ORACLE\\_NONE\\_PRICE\\_FEED);\\n\\n (, int256 latestPrice, , , ) = AggregatorV3Interface(usdPriceFeeds[\\_asset]).latestRoundData();\\n\\n price = uint256(latestPrice);\\n}\\n```\\n
Unclaimed or front-runnable proxy implementations
medium
Various smart contracts in the system require initialization functions to be called. The point when these calls happen is up to the deploying address. Deployment and initialization in one transaction are typically safe, but it can potentially be front-run if the initialization is done in a separate transaction.\\nA frontrunner can call these functions to silently take over the contracts and provide malicious parameters or plant a backdoor during the deployment.\\nLeaving proxy implementations uninitialized further aids potential phishing attacks where users might claim that - just because a contract address is listed in the official documentation/code-repo - a contract is a legitimate component of the system. At the same time, it is ‘only' a proxy implementation that an attacker claimed. For the end-user, it might be hard to distinguish whether this contract is part of the system or was a maliciously appropriated implementation.\\n```\\nfunction initialize(\\n address \\_fujiadmin,\\n address \\_oracle,\\n address \\_collateralAsset,\\n address \\_borrowAsset\\n) external initializer {\\n```\\n\\n`FujiVault` was initialized many days after deployment, and `FujiVault` inherits `VaultBaseUpgradeable`, which exposes a `delegatecall` that can be used to `selfdestruct` the contract's implementation.\\nAnother `FujiVault` was deployed by `deployer` and initialized in a 2-step approach that can theoretically silently be front-run.\\ncode/artifacts/250-core.deploy:L2079-L2079\\n```\\n"deployer": "0xb98d4D4e205afF4d4755E9Df19BD0B8BD4e0f148",\\n```\\n\\nTransactions of deployer:\\nhttps://ftmscan.com/txs?a=0xb98d4D4e205afF4d4755E9Df19BD0B8BD4e0f148&p=2\\nThe specific contract was initialized 19 blocks after deployment.\\nhttps://ftmscan.com/address/0x8513c2db99df213887f63300b23c6dd31f1d14b0\\n\\n`FujiAdminFTM` (and others) don't seem to be initialized. (low priority; no risk other than potential reputational damage)\\ncode/artifacts/250-core.deploy:L1-L7\\n```\\n{\\n "FujiAdmin": {\\n "address": "0xaAb2AAfBFf7419Ff85181d3A846bA9045803dd67",\\n "deployer": "0xb98d4D4e205afF4d4755E9Df19BD0B8BD4e0f148",\\n "abi": [\\n {\\n "anonymous": false,\\n```\\n
It is recommended to use constructors wherever possible to immediately initialize proxy implementations during deploy-time. The code is only run when the implementation is deployed and affects the proxy initializations. If other initialization functions are used, we recommend enforcing deployer access restrictions or a standardized, top-level `initialized` boolean, set to `true` on the first deployment and used to prevent future initialization.\\nUsing constructors and locked-down initialization functions will significantly reduce potential developer errors and the possibility of attackers re-initializing vital system components.
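For the OpenZeppelin upgradeable pattern, a minimal sketch (assuming a recent `Initializable` that exposes `_disableInitializers`) that locks the implementation at deploy time:\\n```\\n/// @custom:oz-upgrades-unsafe-allow constructor\\nconstructor() {\\n // runs on the implementation contract only, never through the proxy;\\n // prevents anyone from initializing (and e.g. selfdestructing) the implementation\\n \\_disableInitializers();\\n}\\n```\\n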
null
```\\nfunction initialize(\\n address \\_fujiadmin,\\n address \\_oracle,\\n address \\_collateralAsset,\\n address \\_borrowAsset\\n) external initializer {\\n```\\n
WFTM - Use of incorrect interface declarations
low
The `WFTMUnwrapper` and various providers utilize the `IWETH` interface declaration for handling funds denoted in `WFTM`. However, the `WETH` and `WFTM` implementations are different. `WFTM` returns `uint256` values to indicate error conditions while the `WETH` contract does not.\\n```\\ncontract WFTMUnwrapper {\\n address constant wftm = 0x21be370D5312f44cB42ce377BC9b8a0cEF1A4C83;\\n\\n receive() external payable {}\\n\\n /\\*\\*\\n \\* @notice Convert WFTM to FTM and transfer to msg.sender\\n \\* @dev msg.sender needs to send WFTM before calling this withdraw\\n \\* @param \\_amount amount to withdraw.\\n \\*/\\n function withdraw(uint256 \\_amount) external {\\n IWETH(wftm).withdraw(\\_amount);\\n (bool sent, ) = msg.sender.call{ value: \\_amount }("");\\n require(sent, "Failed to send FTM");\\n }\\n}\\n```\\n\\nThe `WFTM` contract on Fantom returns an error return value. The error return value cannot be checked when utilizing the `IWETH` interface for `WFTM`. The error return values are never checked throughout the system for `WFTM` operations. This might be intentional to allow `amount=0` on `WFTM` to act as a NOOP, similar to `WETH`.\\n```\\n// convert FTM to WFTM\\nif (isFtm) IWETH(\\_tokenAddr).deposit{ value: \\_amount }();\\n```\\n\\nAlso see issues: issue 4.4, issue 4.5, issue 4.10
We recommend using the correct interfaces for all contracts instead of partial stubs. Do not modify the original function declarations, e.g., by omitting return value declarations. The codebase should also check return values where possible or explicitly state why values can safely be ignored in inline comments or the function's natspec documentation block.
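A sketch of a dedicated `IWFTM` interface that keeps the return value; the exact signatures should be taken from the verified WFTM source rather than from this assumption:\\n```\\ninterface IWFTM {\\n function deposit() external payable returns (uint256 errCode);\\n function withdraw(uint256 amount) external returns (uint256 errCode);\\n}\\n\\n// check the documented error code instead of discarding it\\nuint256 err = IWFTM(wftm).withdraw(\\_amount);\\nrequire(err == 0, "WFTM withdraw failed");\\n```\\n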
null
```\\ncontract WFTMUnwrapper {\\n address constant wftm = 0x21be370D5312f44cB42ce377BC9b8a0cEF1A4C83;\\n\\n receive() external payable {}\\n\\n /\\*\\*\\n \\* @notice Convert WFTM to FTM and transfer to msg.sender\\n \\* @dev msg.sender needs to send WFTM before calling this withdraw\\n \\* @param \\_amount amount to withdraw.\\n \\*/\\n function withdraw(uint256 \\_amount) external {\\n IWETH(wftm).withdraw(\\_amount);\\n (bool sent, ) = msg.sender.call{ value: \\_amount }("");\\n require(sent, "Failed to send FTM");\\n }\\n}\\n```\\n
Inconsistent isFTM, isETH checks
low
`LibUniversalERC20FTM.isFTM()` and `LibUniversalERC20.isETH()` identify native assets by matching against two distinct addresses, while some components only check for one.\\nThe same is true for `FTM`.\\n`Flasher` only identifies a native `asset` transfer by matching `asset` against `_ETH = 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE` while `univTransfer()` identifies it using `0x0 || 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE`\\n```\\nfunction callFunction(\\n address sender,\\n Account.Info calldata account,\\n bytes calldata data\\n) external override {\\n require(msg.sender == \\_dydxSoloMargin && sender == address(this), Errors.VL\\_NOT\\_AUTHORIZED);\\n account;\\n\\n FlashLoan.Info memory info = abi.decode(data, (FlashLoan.Info));\\n\\n uint256 \\_value;\\n if (info.asset == \\_ETH) {\\n // Convert WETH to ETH and assign amount to be set as msg.value\\n \\_convertWethToEth(info.amount);\\n \\_value = info.amount;\\n } else {\\n // Transfer to Vault the flashloan Amount\\n // \\_value is 0\\n IERC20(info.asset).univTransfer(payable(info.vault), info.amount);\\n }\\n```\\n\\n`LibUniversalERC20`\\n```\\nlibrary LibUniversalERC20 {\\n using SafeERC20 for IERC20;\\n\\n IERC20 private constant \\_ETH\\_ADDRESS = IERC20(0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE);\\n IERC20 private constant \\_ZERO\\_ADDRESS = IERC20(0x0000000000000000000000000000000000000000);\\n\\n function isETH(IERC20 token) internal pure returns (bool) {\\n return (token == \\_ZERO\\_ADDRESS || token == \\_ETH\\_ADDRESS);\\n }\\n```\\n\\n```\\nfunction univTransfer(\\n IERC20 token,\\n address payable to,\\n uint256 amount\\n) internal {\\n if (amount > 0) {\\n if (isETH(token)) {\\n (bool sent, ) = to.call{ value: amount }("");\\n require(sent, "Failed to send Ether");\\n } else {\\n token.safeTransfer(to, amount);\\n }\\n }\\n}\\n```\\n\\nThere are multiple other instances of this:\\n```\\nuint256 \\_value = vAssets.borrowAsset == ETH ? debtTotal : 0;\\n```\\n
Consider using a consistent way to identify native asset transfers (i.e. `ETH`, FTM) by using `LibUniversalERC20.isETH()`. Alternatively, the system can be greatly simplified by expecting WFTM and only working with it. This simplification will remove all special cases where the library must handle non-ERC20 interfaces.
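A sketch of `Flasher.callFunction` reusing the library helper so both sentinel addresses are recognized:\\n```\\nFlashLoan.Info memory info = abi.decode(data, (FlashLoan.Info));\\n\\nuint256 \\_value;\\nif (LibUniversalERC20.isETH(IERC20(info.asset))) {\\n // Convert WETH to ETH and assign amount to be set as msg.value\\n \\_convertWethToEth(info.amount);\\n \\_value = info.amount;\\n} else {\\n IERC20(info.asset).univTransfer(payable(info.vault), info.amount);\\n}\\n```\\n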
null
```\\nfunction callFunction(\\n address sender,\\n Account.Info calldata account,\\n bytes calldata data\\n) external override {\\n require(msg.sender == \\_dydxSoloMargin && sender == address(this), Errors.VL\\_NOT\\_AUTHORIZED);\\n account;\\n\\n FlashLoan.Info memory info = abi.decode(data, (FlashLoan.Info));\\n\\n uint256 \\_value;\\n if (info.asset == \\_ETH) {\\n // Convert WETH to ETH and assign amount to be set as msg.value\\n \\_convertWethToEth(info.amount);\\n \\_value = info.amount;\\n } else {\\n // Transfer to Vault the flashloan Amount\\n // \\_value is 0\\n IERC20(info.asset).univTransfer(payable(info.vault), info.amount);\\n }\\n```\\n
FujiOracle - setPriceFeed should check asset and priceFeed decimals
low
`getPriceOf()` assumes that all price feeds return prices with identical decimals, but `setPriceFeed` does not enforce this. Potential misconfigurations can have severe effects on the system's internal accounting.\\n```\\n/\\*\\*\\n \\* @dev Sets '\\_priceFeed' address for a '\\_asset'.\\n \\* Can only be called by the contract owner.\\n \\* Emits a {AssetPriceFeedChanged} event.\\n \\*/\\nfunction setPriceFeed(address \\_asset, address \\_priceFeed) public onlyOwner {\\n require(\\_priceFeed != address(0), Errors.VL\\_ZERO\\_ADDR);\\n usdPriceFeeds[\\_asset] = \\_priceFeed;\\n emit AssetPriceFeedChanged(\\_asset, \\_priceFeed);\\n}\\n```\\n
We recommend adding additional checks to detect unexpected changes in assets' properties. Safeguard price feeds by enforcing `priceFeed == address(0) || priceFeed.decimals() == 8`. This allows the owner to disable a `priceFeed` (setting it to zero) and otherwise ensures that the feed is compatible and indeed returns `8` decimals.
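A sketch of the guard (assuming the feed implements Chainlink's `AggregatorV3Interface.decimals()`; the error constant is an assumption):\\n```\\nfunction setPriceFeed(address \\_asset, address \\_priceFeed) public onlyOwner {\\n // address(0) disables the feed; otherwise it must report 8 decimals\\n require(\\n \\_priceFeed == address(0) || AggregatorV3Interface(\\_priceFeed).decimals() == 8,\\n Errors.ORACLE\\_INVALID\\_DECIMALS\\n );\\n usdPriceFeeds[\\_asset] = \\_priceFeed;\\n emit AssetPriceFeedChanged(\\_asset, \\_priceFeed);\\n}\\n```\\n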
null
```\\n/\\*\\*\\n \\* @dev Sets '\\_priceFeed' address for a '\\_asset'.\\n \\* Can only be called by the contract owner.\\n \\* Emits a {AssetPriceFeedChanged} event.\\n \\*/\\nfunction setPriceFeed(address \\_asset, address \\_priceFeed) public onlyOwner {\\n require(\\_priceFeed != address(0), Errors.VL\\_ZERO\\_ADDR);\\n usdPriceFeeds[\\_asset] = \\_priceFeed;\\n emit AssetPriceFeedChanged(\\_asset, \\_priceFeed);\\n}\\n```\\n
UniProxy.depositSwap - Tokens are not approved before calling Router.exactInput
high
The call to `Router.exactInput` requires the sender to pre-approve the tokens. We could not find any such approval, and thus we assume that a call to `UniProxy.depositSwap` will always revert.\\n```\\nrouter = ISwapRouter(\\_router);\\nuint256 amountOut;\\nuint256 swap;\\nif(swapAmount < 0) {\\n //swap token1 for token0\\n\\n swap = uint256(swapAmount \\* -1);\\n IHypervisor(pos).token1().transferFrom(msg.sender, address(this), deposit1+swap);\\n amountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(\\n path,\\n address(this),\\n block.timestamp + swapLife,\\n swap,\\n deposit0\\n )\\n );\\n}\\nelse{\\n //swap token1 for token0\\n swap = uint256(swapAmount);\\n IHypervisor(pos).token0().transferFrom(msg.sender, address(this), deposit0+swap);\\n\\n amountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(\\n path,\\n address(this),\\n block.timestamp + swapLife,\\n swap,\\n deposit1\\n )\\n ); \\n}\\n```\\n
Resolution\\nFixed in GammaStrategies/[email protected]9a7a3dd by deleting the `depositSwap` function.\\nConsider approving the exact amount of input tokens before the swap.
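A sketch of the missing approval (assuming OpenZeppelin's `SafeERC20`); tokens that require a zero allowance first would need an extra reset:\\n```\\n// approve exactly the input amount for this swap before calling the router\\nIHypervisor(pos).token1().safeApprove(address(router), swap);\\namountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(path, address(this), block.timestamp + swapLife, swap, deposit0)\\n);\\n```\\n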
null
```\\nrouter = ISwapRouter(\\_router);\\nuint256 amountOut;\\nuint256 swap;\\nif(swapAmount < 0) {\\n //swap token1 for token0\\n\\n swap = uint256(swapAmount \\* -1);\\n IHypervisor(pos).token1().transferFrom(msg.sender, address(this), deposit1+swap);\\n amountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(\\n path,\\n address(this),\\n block.timestamp + swapLife,\\n swap,\\n deposit0\\n )\\n );\\n}\\nelse{\\n //swap token1 for token0\\n swap = uint256(swapAmount);\\n IHypervisor(pos).token0().transferFrom(msg.sender, address(this), deposit0+swap);\\n\\n amountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(\\n path,\\n address(this),\\n block.timestamp + swapLife,\\n swap,\\n deposit1\\n )\\n ); \\n}\\n```\\n
Uniproxy.depositSwap - _router should not be determined by the caller
high
`UniProxy.depositSwap` uses a `_router` that is determined by the caller, who in turn might inject a "fake" contract and thus steal funds stuck in the `UniProxy` contract.\\nThe `UniProxy` contract has certain trust assumptions regarding the router. The router is supposed to return no less than `deposit1` (or `deposit0`) tokens, but that fact is never checked.\\n```\\nfunction depositSwap(\\n int256 swapAmount, // (-) token1, (+) token0 for token1; amount to swap\\n uint256 deposit0,\\n uint256 deposit1,\\n address to,\\n address from,\\n bytes memory path,\\n address pos,\\n address \\_router\\n) external returns (uint256 shares) {\\n```\\n
Consider removing the `_router` parameter from the function, and instead, use a storage variable that will be initialized in the constructor.
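A sketch of the constructor-initialized alternative (to be merged with any existing constructor logic):\\n```\\nISwapRouter public immutable router;\\n\\nconstructor(address \\_router) {\\n require(\\_router != address(0), "zero router");\\n router = ISwapRouter(\\_router);\\n}\\n\\n// depositSwap then drops its address \\_router parameter and uses `router` directly\\n```\\n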
null
```\\nfunction depositSwap(\\n int256 swapAmount, // (-) token1, (+) token0 for token1; amount to swap\\n uint256 deposit0,\\n uint256 deposit1,\\n address to,\\n address from,\\n bytes memory path,\\n address pos,\\n address \\_router\\n) external returns (uint256 shares) {\\n```\\n
Re-entrancy + flash loan attack can invalidate price check
high
The `UniProxy` contract has a price manipulation protection:\\n```\\nif (twapCheck || positions[pos].twapOverride) {\\n // check twap\\n checkPriceChange(\\n pos,\\n (positions[pos].twapOverride ? positions[pos].twapInterval : twapInterval),\\n (positions[pos].twapOverride ? positions[pos].priceThreshold : priceThreshold)\\n );\\n}\\n```\\n\\nBut after that, the tokens are transferred from the user; if the token transfer allows an attacker to hijack the call flow of the transaction, the attacker can manipulate the Uniswap price after the check has already happened. The Hypervisor's `deposit` function itself is thus vulnerable to a flash-loan attack.
Make sure the price does not change before the `Hypervisor.deposit` call. For example, the token transfers can be made at the beginning of the `UniProxy.deposit` function.
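An abridged sketch of the reordering, pulling funds before the TWAP check so an attacker-controlled token hook can no longer move the price between the check and the deposit:\\n```\\n// 1. pull both tokens first; any re-entrant token hook runs here\\nIHypervisor(pos).token0().transferFrom(msg.sender, address(this), deposit0);\\nIHypervisor(pos).token1().transferFrom(msg.sender, address(this), deposit1);\\n\\n// 2. only then validate the price\\nif (twapCheck || positions[pos].twapOverride) {\\n checkPriceChange(\\n pos,\\n (positions[pos].twapOverride ? positions[pos].twapInterval : twapInterval),\\n (positions[pos].twapOverride ? positions[pos].priceThreshold : priceThreshold)\\n );\\n}\\n\\n// 3. finally deposit into the Hypervisor\\n```\\n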
null
```\\nif (twapCheck || positions[pos].twapOverride) {\\n // check twap\\n checkPriceChange(\\n pos,\\n (positions[pos].twapOverride ? positions[pos].twapInterval : twapInterval),\\n (positions[pos].twapOverride ? positions[pos].priceThreshold : priceThreshold)\\n );\\n}\\n```\\n
UniProxy.properDepositRatio - Proper ratio will not prevent liquidity imbalance for all possible scenarios
high
The purpose of `UniProxy.properDepositRatio` is to prevent liquidity imbalance. The idea is to compare the deposit ratio with the `hypeRatio`, which is the ratio between the tokens held by the `Hypervisor` contract. In practice, however, this function will not prevent a skewed deposit ratio in many cases: `deposit1 / deposit0` might be a huge number, while both `depositRatio` and `hypeRatio` are clamped to the range `[10e16, 10e18]` (i.e., ratios between 0.1 and 10, scaled by 1e18). Consider the case where `hype1 / hype0 >= 10`, which means `hypeRatio = 10e18`; now if, for example, `deposit1 / deposit0 = 10^200`, then `depositRatio` is also clamped to `10e18`, and the transaction will pass, which is clearly not intended.\\n```\\nfunction properDepositRatio(\\n address pos,\\n uint256 deposit0,\\n uint256 deposit1\\n) public view returns (bool) {\\n (uint256 hype0, uint256 hype1) = IHypervisor(pos).getTotalAmounts();\\n if (IHypervisor(pos).totalSupply() != 0) {\\n uint256 depositRatio = deposit0 == 0 ? 10e18 : deposit1.mul(1e18).div(deposit0);\\n depositRatio = depositRatio > 10e18 ? 10e18 : depositRatio;\\n depositRatio = depositRatio < 10e16 ? 10e16 : depositRatio;\\n uint256 hypeRatio = hype0 == 0 ? 10e18 : hype1.mul(1e18).div(hype0);\\n hypeRatio = hypeRatio > 10e18 ? 10e18 : hypeRatio;\\n hypeRatio = hypeRatio < 10e16 ? 10e16 : hypeRatio;\\n return (FullMath.mulDiv(depositRatio, deltaScale, hypeRatio) < depositDelta &&\\n FullMath.mulDiv(hypeRatio, deltaScale, depositRatio) < depositDelta);\\n }\\n return true;\\n}\\n```\\n
Resolution\\nFixed in GammaStrategies/[email protected]9a7a3dd by deleting the `properDepositRatio` function.\\nConsider removing the cap of [0.1,10] both for `depositRatio` and for `hypeRatio`.
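A sketch with the clamping removed (the zero-denominator sentinel is kept and still deserves separate handling):\\n```\\nfunction properDepositRatio(\\n address pos,\\n uint256 deposit0,\\n uint256 deposit1\\n) public view returns (bool) {\\n (uint256 hype0, uint256 hype1) = IHypervisor(pos).getTotalAmounts();\\n if (IHypervisor(pos).totalSupply() != 0) {\\n // no [10e16, 10e18] clamping: extreme ratios are compared as-is\\n uint256 depositRatio = deposit0 == 0 ? 10e18 : deposit1.mul(1e18).div(deposit0);\\n uint256 hypeRatio = hype0 == 0 ? 10e18 : hype1.mul(1e18).div(hype0);\\n return (FullMath.mulDiv(depositRatio, deltaScale, hypeRatio) < depositDelta &&\\n FullMath.mulDiv(hypeRatio, deltaScale, depositRatio) < depositDelta);\\n }\\n return true;\\n}\\n```\\n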
null
```\\nfunction properDepositRatio(\\n address pos,\\n uint256 deposit0,\\n uint256 deposit1\\n) public view returns (bool) {\\n (uint256 hype0, uint256 hype1) = IHypervisor(pos).getTotalAmounts();\\n if (IHypervisor(pos).totalSupply() != 0) {\\n uint256 depositRatio = deposit0 == 0 ? 10e18 : deposit1.mul(1e18).div(deposit0);\\n depositRatio = depositRatio > 10e18 ? 10e18 : depositRatio;\\n depositRatio = depositRatio < 10e16 ? 10e16 : depositRatio;\\n uint256 hypeRatio = hype0 == 0 ? 10e18 : hype1.mul(1e18).div(hype0);\\n hypeRatio = hypeRatio > 10e18 ? 10e18 : hypeRatio;\\n hypeRatio = hypeRatio < 10e16 ? 10e16 : hypeRatio;\\n return (FullMath.mulDiv(depositRatio, deltaScale, hypeRatio) < depositDelta &&\\n FullMath.mulDiv(hypeRatio, deltaScale, depositRatio) < depositDelta);\\n }\\n return true;\\n}\\n```\\n
UniProxy.depositSwap doesn't deposit all the users' funds
medium
When executing the swap, the minimal amount out is passed to the router (`deposit1` in this example), but the amount actually received will be `amountOut`, which can be higher. After the trade, instead of depositing `amountOut`, the contract tries to deposit `deposit1`, which is lower. This may result in some users' funds staying in the `UniProxy` contract.\\n```\\nelse{\\n //swap token1 for token0\\n swap = uint256(swapAmount);\\n IHypervisor(pos).token0().transferFrom(msg.sender, address(this), deposit0+swap);\\n\\n amountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(\\n path,\\n address(this),\\n block.timestamp + swapLife,\\n swap,\\n deposit1\\n )\\n ); \\n}\\n\\nrequire(amountOut > 0, "Swap failed");\\n\\nif (positions[pos].version < 2) {\\n // requires lp token transfer from proxy to msg.sender\\n shares = IHypervisor(pos).deposit(deposit0, deposit1, address(this));\\n IHypervisor(pos).transfer(to, shares);\\n}\\n```\\n
Resolution\\nFixed in GammaStrategies/[email protected]9a7a3dd by deleting the `depositSwap` function.\\nDeposit all the user's funds to the Hypervisor.
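In the shown branch the swap proceeds are token1, so a one-line sketch of the fix is to deposit what was actually received:\\n```\\n// deposit the swap output rather than the minimum that was only used as a bound\\nshares = IHypervisor(pos).deposit(deposit0, amountOut, address(this));\\n```\\n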
null
```\\nelse{\\n //swap token1 for token0\\n swap = uint256(swapAmount);\\n IHypervisor(pos).token0().transferFrom(msg.sender, address(this), deposit0+swap);\\n\\n amountOut = router.exactInput(\\n ISwapRouter.ExactInputParams(\\n path,\\n address(this),\\n block.timestamp + swapLife,\\n swap,\\n deposit1\\n )\\n ); \\n}\\n\\nrequire(amountOut > 0, "Swap failed");\\n\\nif (positions[pos].version < 2) {\\n // requires lp token transfer from proxy to msg.sender\\n shares = IHypervisor(pos).deposit(deposit0, deposit1, address(this));\\n IHypervisor(pos).transfer(to, shares);\\n}\\n```\\n
Hypervisor - Multiple “sandwiching” front running vectors
medium
The amount of tokens received from `UniswapV3Pool` functions might be manipulated by front-runners due to the decentralized nature of AMMs, where the order of transactions can not be pre-determined. A potential “sandwicher” may insert a buying order before the user's call to `Hypervisor.rebalance` for instance, and a sell order after.\\nMore specifically, calls to `pool.swap`, `pool.mint`, `pool.burn` are susceptible to “sandwiching” vectors.\\n`Hypervisor.rebalance`\\n```\\nif (swapQuantity != 0) {\\n pool.swap(\\n address(this),\\n swapQuantity > 0,\\n swapQuantity > 0 ? swapQuantity : -swapQuantity,\\n swapQuantity > 0 ? TickMath.MIN\\_SQRT\\_RATIO + 1 : TickMath.MAX\\_SQRT\\_RATIO - 1,\\n abi.encode(address(this))\\n );\\n}\\n```\\n\\n```\\nfunction \\_mintLiquidity(\\n int24 tickLower,\\n int24 tickUpper,\\n uint128 liquidity,\\n address payer\\n) internal returns (uint256 amount0, uint256 amount1) {\\n if (liquidity > 0) {\\n (amount0, amount1) = pool.mint(\\n address(this),\\n tickLower,\\n tickUpper,\\n liquidity,\\n abi.encode(payer)\\n );\\n }\\n}\\n```\\n\\n```\\nfunction \\_burnLiquidity(\\n int24 tickLower,\\n int24 tickUpper,\\n uint128 liquidity,\\n address to,\\n bool collectAll\\n) internal returns (uint256 amount0, uint256 amount1) {\\n if (liquidity > 0) {\\n // Burn liquidity\\n (uint256 owed0, uint256 owed1) = pool.burn(tickLower, tickUpper, liquidity);\\n\\n // Collect amount owed\\n uint128 collect0 = collectAll ? type(uint128).max : \\_uint128Safe(owed0);\\n uint128 collect1 = collectAll ? type(uint128).max : \\_uint128Safe(owed1);\\n if (collect0 > 0 || collect1 > 0) {\\n (amount0, amount1) = pool.collect(to, tickLower, tickUpper, collect0, collect1);\\n }\\n }\\n}\\n```\\n
Consider adding an `amountMin` parameter(s) to ensure that at least the `amountMin` of tokens was received.
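A sketch for the `rebalance` swap, using the pool's returned deltas to enforce a caller-supplied minimum (the `amountOutMin` parameter is an assumption):\\n```\\n(int256 amount0Delta, int256 amount1Delta) = pool.swap(\\n address(this),\\n swapQuantity > 0,\\n swapQuantity > 0 ? swapQuantity : -swapQuantity,\\n swapQuantity > 0 ? TickMath.MIN\\_SQRT\\_RATIO + 1 : TickMath.MAX\\_SQRT\\_RATIO - 1,\\n abi.encode(address(this))\\n);\\n// the negative delta is what the pool paid out to this contract\\nuint256 received = uint256(-(swapQuantity > 0 ? amount1Delta : amount0Delta));\\nrequire(received >= amountOutMin, "rebalance: insufficient output");\\n```\\n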
null
```\\nif (swapQuantity != 0) {\\n pool.swap(\\n address(this),\\n swapQuantity > 0,\\n swapQuantity > 0 ? swapQuantity : -swapQuantity,\\n swapQuantity > 0 ? TickMath.MIN\\_SQRT\\_RATIO + 1 : TickMath.MAX\\_SQRT\\_RATIO - 1,\\n abi.encode(address(this))\\n );\\n}\\n```\\n
Uniswap v3 callbacks access control should be hardened
low
Resolution\\nFixed in GammaStrategies/[email protected]9a7a3dd by implementing the auditor's recommendation for `uniswapV3MintCallback`, and deleting `uniswapV3SwapCallback` and the call to `pool.swap`.\\nUniswap v3 uses a callback pattern to pull funds from the caller. The caller (in this case `Hypervisor`) has to implement a callback function which will be called by the Uniswap `pool`. Both `uniswapV3MintCallback` and `uniswapV3SwapCallback` restrict access to the callback functions to the `pool`. However, this alone will not block a random call from the `pool` contract in case the latter was hacked, which could result in the theft of all funds held in `Hypervisor`, or of any user that has approved the `Hypervisor` contract to transfer tokens on their behalf.\\n```\\nfunction uniswapV3MintCallback(\\n uint256 amount0,\\n uint256 amount1,\\n bytes calldata data\\n) external override {\\n require(msg.sender == address(pool));\\n address payer = abi.decode(data, (address));\\n\\n if (payer == address(this)) {\\n if (amount0 > 0) token0.safeTransfer(msg.sender, amount0);\\n if (amount1 > 0) token1.safeTransfer(msg.sender, amount1);\\n } else {\\n if (amount0 > 0) token0.safeTransferFrom(payer, msg.sender, amount0);\\n if (amount1 > 0) token1.safeTransferFrom(payer, msg.sender, amount1);\\n }\\n}\\n\\nfunction uniswapV3SwapCallback(\\n int256 amount0Delta,\\n int256 amount1Delta,\\n bytes calldata data\\n) external override {\\n require(msg.sender == address(pool));\\n address payer = abi.decode(data, (address));\\n\\n if (amount0Delta > 0) {\\n if (payer == address(this)) {\\n token0.safeTransfer(msg.sender, uint256(amount0Delta));\\n } else {\\n token0.safeTransferFrom(payer, msg.sender, uint256(amount0Delta));\\n }\\n } else if (amount1Delta > 0) {\\n if (payer == address(this)) {\\n token1.safeTransfer(msg.sender, uint256(amount1Delta));\\n } else {\\n token1.safeTransferFrom(payer, msg.sender, uint256(amount1Delta));\\n }\\n }\\n}\\n```\\n
Consider adding (boolean) storage variables that will help to track whether a call to `uniswapV3MintCallback | uniswapV3SwapCallback` was preceded by a call to `_mintLiquidity | rebalance` respectively. An example for the `rebalance` function would be bool `rebalanceCalled`, this variable will be assigned a `true` value in `rebalance` before the external call of `pool.swap`, then `uniswapV3SwapCallback` will require that `rebalanceCalled` == `true`, and then right after `rebalanceCalled` will be assigned a `false` value.
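A sketch of the flag for the swap path (the mint path would get an analogous flag):\\n```\\nbool private rebalanceCalled;\\n\\n// inside rebalance, immediately before the external call:\\nrebalanceCalled = true;\\npool.swap(/\\* rest of code \\*/);\\n\\nfunction uniswapV3SwapCallback(\\n int256 amount0Delta,\\n int256 amount1Delta,\\n bytes calldata data\\n) external override {\\n require(msg.sender == address(pool));\\n require(rebalanceCalled, "callback without rebalance");\\n rebalanceCalled = false;\\n // rest of code\\n}\\n```\\n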
null
```\\nfunction uniswapV3MintCallback(\\n uint256 amount0,\\n uint256 amount1,\\n bytes calldata data\\n) external override {\\n require(msg.sender == address(pool));\\n address payer = abi.decode(data, (address));\\n\\n if (payer == address(this)) {\\n if (amount0 > 0) token0.safeTransfer(msg.sender, amount0);\\n if (amount1 > 0) token1.safeTransfer(msg.sender, amount1);\\n } else {\\n if (amount0 > 0) token0.safeTransferFrom(payer, msg.sender, amount0);\\n if (amount1 > 0) token1.safeTransferFrom(payer, msg.sender, amount1);\\n }\\n}\\n\\nfunction uniswapV3SwapCallback(\\n int256 amount0Delta,\\n int256 amount1Delta,\\n bytes calldata data\\n) external override {\\n require(msg.sender == address(pool));\\n address payer = abi.decode(data, (address));\\n\\n if (amount0Delta > 0) {\\n if (payer == address(this)) {\\n token0.safeTransfer(msg.sender, uint256(amount0Delta));\\n } else {\\n token0.safeTransferFrom(payer, msg.sender, uint256(amount0Delta));\\n }\\n } else if (amount1Delta > 0) {\\n if (payer == address(this)) {\\n token1.safeTransfer(msg.sender, uint256(amount1Delta));\\n } else {\\n token1.safeTransferFrom(payer, msg.sender, uint256(amount1Delta));\\n }\\n }\\n}\\n```\\n
Initialization flaws
low
For non-upgradeable contracts, the Solidity compiler takes care of chaining the constructor calls of an inheritance hierarchy in the right order; for upgradeable contracts, taking care of initialization is a manual task - and with extensive use of inheritance, it is tedious and error-prone. The convention in OpenZeppelin Contracts Upgradeable is to have a `__C_init_unchained` function that contains the actual initialization logic for contract `C` and a `__C_init` function that calls the `*_init_unchained` function for every super-contract - direct and indirect - in the inheritance hierarchy (including C) in the C3-linearized order from most basic to most derived. This pattern imitates what the compiler does for constructors.\\nAll `*_init` functions in the contracts (__ERC20WrapperGluwacoin_init, `__ERC20Reservable_init`, `__ERC20ETHless_init`, and __ERC20Wrapper_init) are missing some `_init_unchained` calls, and sometimes the existing calls are not in the correct order.\\nThe `__ERC20WrapperGluwacoin_init` function is implemented as follows:\\n```\\nfunction \\_\\_ERC20WrapperGluwacoin\\_init(\\n string memory name,\\n string memory symbol,\\n IERC20 token\\n) internal initializer {\\n \\_\\_Context\\_init\\_unchained();\\n \\_\\_ERC20\\_init\\_unchained(name, symbol);\\n \\_\\_ERC20ETHless\\_init\\_unchained();\\n \\_\\_ERC20Reservable\\_init\\_unchained();\\n \\_\\_AccessControlEnumerable\\_init\\_unchained();\\n \\_\\_ERC20Wrapper\\_init\\_unchained(token);\\n \\_\\_ERC20WrapperGluwacoin\\_init\\_unchained();\\n}\\n```\\n\\nAnd the C3 linearization is:\\n```\\nERC20WrapperGluwacoin\\n ↖ ERC20Reservable\\n ↖ ERC20ETHless\\n ↖ ERC20Wrapper\\n ↖ ERC20Upgradeable\\n ↖ IERC20MetadataUpgradeable\\n ↖ IERC20Upgradeable\\n ↖ AccessControlEnumerableUpgradeable\\n ↖ AccessControlUpgradeable\\n ↖ ERC165Upgradeable\\n ↖ IERC165Upgradeable\\n ↖ IAccessControlEnumerableUpgradeable\\n ↖ IAccessControlUpgradeable\\n ↖ ContextUpgradeable\\n ↖ Initializable\\n```\\n\\nThe calls 
`__ERC165_init_unchained();` and `__AccessControl_init_unchained();` are missing, and `__ERC20Wrapper_init_unchained(token);` should move between `__ERC20_init_unchained(name, symbol);` and `__ERC20ETHless_init_unchained();`.
Review all `*_init` functions, add the missing `*_init_unchained` calls, and fix the order of these calls.
null
```\\nfunction \\_\\_ERC20WrapperGluwacoin\\_init(\\n string memory name,\\n string memory symbol,\\n IERC20 token\\n) internal initializer {\\n \\_\\_Context\\_init\\_unchained();\\n \\_\\_ERC20\\_init\\_unchained(name, symbol);\\n \\_\\_ERC20ETHless\\_init\\_unchained();\\n \\_\\_ERC20Reservable\\_init\\_unchained();\\n \\_\\_AccessControlEnumerable\\_init\\_unchained();\\n \\_\\_ERC20Wrapper\\_init\\_unchained(token);\\n \\_\\_ERC20WrapperGluwacoin\\_init\\_unchained();\\n}\\n```\\n
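The correct chaining order can be reproduced outside Solidity: Python happens to use the same C3 linearization, so a small model of a simplified version of the hierarchy derives the order in which the `*_init_unchained` calls must run. Class names mirror the contracts; interface contracts are omitted since they carry no initialization logic, and the simplified base lists are an assumption of this sketch, not the exact OpenZeppelin declarations.

```python
# Simplified model of the Gluwacoin hierarchy. Python lists bases
# most-derived-first, i.e. the reverse of the Solidity `is` clause.
class ContextUpgradeable: pass
class AccessControlEnumerableUpgradeable(ContextUpgradeable): pass
class ERC20Upgradeable(ContextUpgradeable): pass
class ERC20Wrapper(ERC20Upgradeable): pass
class ERC20ETHless(ERC20Upgradeable): pass
class ERC20Reservable(ERC20Upgradeable): pass

class ERC20WrapperGluwacoin(ERC20Reservable, ERC20ETHless, ERC20Wrapper,
                            AccessControlEnumerableUpgradeable): pass

# The init chain imitates what the compiler does for constructors:
# walk the C3 linearization from most basic to most derived.
init_order = [cls.__name__
              for cls in reversed(ERC20WrapperGluwacoin.__mro__)
              if cls is not object]
print(init_order)
```

Note that `ERC20Wrapper` lands between `ERC20Upgradeable` and `ERC20ETHless` in the derived order, matching the recommendation to move `__ERC20Wrapper_init_unchained(token);` between `__ERC20_init_unchained(name, symbol);` and `__ERC20ETHless_init_unchained();`.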
Flaw in _beforeTokenTransfer call chain and missing tests
low
In OpenZeppelin's ERC-20 implementation, the virtual `_beforeTokenTransfer` function provides a hook that is called before tokens are transferred, minted, or burned. In the Gluwacoin codebase, it is used to check whether the unreserved balance (as opposed to the regular balance, which is checked by the ERC-20 implementation) of the sender is sufficient to allow this transfer or burning.\\nIn `ERC20WrapperGluwacoin`, `ERC20Reservable`, and `ERC20Wrapper`, the `_beforeTokenTransfer` function is implemented in the following way:\\n```\\nfunction \\_beforeTokenTransfer(\\n address from,\\n address to,\\n uint256 amount\\n) internal override(ERC20Upgradeable, ERC20Wrapper, ERC20Reservable) {\\n ERC20Wrapper.\\_beforeTokenTransfer(from, to, amount);\\n ERC20Reservable.\\_beforeTokenTransfer(from, to, amount);\\n}\\n```\\n\\n```\\nfunction \\_beforeTokenTransfer(address from, address to, uint256 amount) internal virtual override (ERC20Upgradeable) {\\n if (from != address(0)) {\\n require(\\_unreservedBalance(from) >= amount, "ERC20Reservable: transfer amount exceeds unreserved balance");\\n }\\n\\n super.\\_beforeTokenTransfer(from, to, amount);\\n}\\n```\\n\\n```\\nfunction \\_beforeTokenTransfer(address from, address to, uint256 amount) internal virtual override (ERC20Upgradeable) {\\n super.\\_beforeTokenTransfer(from, to, amount);\\n}\\n```\\n\\nFinally, the C3-linearization of the contracts is:\\n```\\nERC20WrapperGluwacoin\\n ↖ ERC20Reservable\\n ↖ ERC20ETHless\\n ↖ ERC20Wrapper\\n ↖ ERC20Upgradeable\\n ↖ IERC20MetadataUpgradeable\\n ↖ IERC20Upgradeable\\n ↖ AccessControlEnumerableUpgradeable\\n ↖ AccessControlUpgradeable\\n ↖ ERC165Upgradeable\\n ↖ IERC165Upgradeable\\n ↖ IAccessControlEnumerableUpgradeable\\n ↖ IAccessControlUpgradeable\\n ↖ ContextUpgradeable\\n ↖ Initializable\\n```\\n\\nThis means `ERC20Wrapper._beforeTokenTransfer` is ultimately called twice - once directly in `ERC20WrapperGluwacoin._beforeTokenTransfer` and then a second time because the 
`super._beforeTokenTransfer` call in `ERC20Reservable._beforeTokenTransfer` resolves to `ERC20Wrapper._beforeTokenTransfer`. (ERC20ETHless doesn't override _beforeTokenTransfer.)\\nMoreover, while reviewing the correctness and coverage of the tests is not in scope for this engagement, we happened to notice that there are no tests that check whether the unreserved balance is sufficient for transferring or burning tokens.
`ERC20WrapperGluwacoin._beforeTokenTransfer` should just call `super._beforeTokenTransfer`. Moreover, the `_beforeTokenTransfer` implementation can be removed from `ERC20Wrapper`.\\nWe would like to stress the importance of careful and comprehensive testing in general and of this functionality in particular, as it is crucial for the system's integrity. We also encourage investigating whether there are more such omissions and an evaluation of the test quality and coverage in general.
null
```\\nfunction \\_beforeTokenTransfer(\\n address from,\\n address to,\\n uint256 amount\\n) internal override(ERC20Upgradeable, ERC20Wrapper, ERC20Reservable) {\\n ERC20Wrapper.\\_beforeTokenTransfer(from, to, amount);\\n ERC20Reservable.\\_beforeTokenTransfer(from, to, amount);\\n}\\n```\\n
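Since Python also resolves `super()` along a C3 linearization, the double call can be demonstrated directly: a model of the diamond shows that two explicit parent calls run the `ERC20Wrapper` hook twice, while a single cooperative `super()` call runs every hook exactly once. This is a sketch of the call-ordering semantics, not of the contracts' balance checks.

```python
calls = []

class ERC20Upgradeable:
    def _before_token_transfer(self):
        calls.append("ERC20Upgradeable")

class ERC20Wrapper(ERC20Upgradeable):
    def _before_token_transfer(self):
        calls.append("ERC20Wrapper")
        super()._before_token_transfer()

class ERC20ETHless(ERC20Upgradeable):
    pass  # does not override the hook, like the Solidity contract

class ERC20Reservable(ERC20Upgradeable):
    def _before_token_transfer(self):
        calls.append("ERC20Reservable")
        # resolves past ERC20ETHless to ERC20Wrapper in the linearization
        super()._before_token_transfer()

class BuggyGluwacoin(ERC20Reservable, ERC20ETHless, ERC20Wrapper):
    def _before_token_transfer(self):
        # mirrors the current Solidity code: two explicit parent calls
        ERC20Wrapper._before_token_transfer(self)
        ERC20Reservable._before_token_transfer(self)

class FixedGluwacoin(ERC20Reservable, ERC20ETHless, ERC20Wrapper):
    def _before_token_transfer(self):
        super()._before_token_transfer()  # the recommended fix

BuggyGluwacoin()._before_token_transfer()
buggy_calls = calls[:]
calls.clear()
FixedGluwacoin()._before_token_transfer()
fixed_calls = calls[:]

print(buggy_calls.count("ERC20Wrapper"))  # 2 -- the wrapper hook runs twice
print(fixed_calls.count("ERC20Wrapper"))  # 1 -- each hook runs exactly once
```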
Hard-coded decimals
low
The Gluwacoin wrapper token should have the same number of decimals as the wrapped ERC-20. Currently, the number of decimals is hard-coded to 6. This limits flexibility: wrapping a token with a different number of decimals requires source code changes and recompilation.\\n```\\nfunction decimals() public pure override returns (uint8) {\\n return 6;\\n}\\n```\\n
We recommend supplying the number of `decimals` as an initialization parameter and storing it in a state variable. That increases gas consumption of the `decimals` function, but we doubt this view function will be frequently called from a contract, and even if it was, we think the benefits far outweigh the costs.\\nMoreover, we believe the `decimals` logic (i.e., function `decimals` and the new state variable) should be implemented in the `ERC20Wrapper` contract - which holds the basic ERC-20 functionality of the wrapper token - and not in `ERC20WrapperGluwacoin`, which is the base contract of the entire system.
null
```\\nfunction decimals() public pure override returns (uint8) {\\n return 6;\\n}\\n```\\n
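A minimal Python sketch of the recommended change, with illustrative names that are not the actual contract API: the wrapped token's decimals are accepted as an initialization parameter and stored, instead of being hard-coded.

```python
class ERC20WrapperModel:
    """Hypothetical model: decimals is supplied at initialization."""

    def initialize(self, name, symbol, wrapped_decimals):
        self._name = name
        self._symbol = symbol
        self._decimals = wrapped_decimals  # mirror the wrapped ERC-20

    def decimals(self):
        # one extra storage read compared to the hard-coded constant
        return self._decimals

six_decimal_wrapper = ERC20WrapperModel()
six_decimal_wrapper.initialize("Wrapped Six", "wSIX", 6)
eighteen_decimal_wrapper = ERC20WrapperModel()
eighteen_decimal_wrapper.initialize("Wrapped Eighteen", "wEIGHTEEN", 18)
print(six_decimal_wrapper.decimals(), eighteen_decimal_wrapper.decimals())  # 6 18
```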
Re-initialization of the Balancer pool is potentially possible
low
Instead of creating a new Balancer pool for an auction every time, the same pool is re-used repeatedly. When this happens, the old liquidity is withdrawn, and if there is enough FEI in the contract, the weights are shifted and the pool is filled with new tokens. If there is not enough FEI, the pool is left empty, and users can still interact with it. When there is enough FEI again, the pool is re-initialized, which is not the intention:\\n```\\nuint256 bptTotal = pool.totalSupply();\\nuint256 bptBalance = pool.balanceOf(address(this));\\n\\n// Balancer locks a small amount of bptTotal after init, so 0 bpt means pool needs initializing\\nif (bptTotal == 0) {\\n \\_initializePool();\\n return;\\n}\\n```\\n\\nTheoretically, this will never happen because there should be minimal leftover liquidity tokens after the withdrawal. But we could not strictly verify that fact because it would require looking into the Balancer code much more deeply.
One option would be to allow re-using the pool only in atomic transactions: if there are not enough FEI tokens for the next auction, the `swap` transaction reverts. That would also help with another issue (issue 3.2).
null
```\\nuint256 bptTotal = pool.totalSupply();\\nuint256 bptBalance = pool.balanceOf(address(this));\\n\\n// Balancer locks a small amount of bptTotal after init, so 0 bpt means pool needs initializing\\nif (bptTotal == 0) {\\n \\_initializePool();\\n return;\\n}\\n```\\n
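The recommended guard can be sketched as follows; the names and the FEI threshold are hypothetical, not taken from the contract. The point is that after exiting the old liquidity, the whole transaction reverts when there is not enough FEI to seed the next auction, so the pool is never left empty but still tradable.

```python
MIN_FEI_FOR_AUCTION = 1_000_000  # hypothetical minimum, not from the contract

def swap(fei_balance, bpt_total):
    if bpt_total == 0:
        # Balancer locks a small amount of BPT after init, so this branch
        # should only ever run for the very first initialization
        return "initialize_pool"
    # old liquidity exited here; require enough FEI before re-seeding
    if fei_balance < MIN_FEI_FOR_AUCTION:
        # atomic re-use: no FEI, no auction -- revert instead of leaving
        # an empty but tradable pool behind
        raise RuntimeError("insufficient FEI for next auction")
    return "reinitialize_auction"

print(swap(0, 0))          # initialize_pool
print(swap(2_000_000, 1))  # reinitialize_auction
```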
The BalancerLBPSwapper may not have enough Tribe tokens
low
Whenever the `swap` function is called, it should re-initialize the Balancer pool, which requires adding liquidity: 99% FEI and 1% Tribe. So the Tribe must already be in the contract.\\n```\\nfunction \\_getTokensIn(uint256 spentTokenBalance) internal view returns(uint256[] memory amountsIn) {\\n amountsIn = new uint256[](2);\\n\\n uint256 receivedTokenBalance = readOracle().mul(spentTokenBalance).mul(ONE\\_PERCENT).div(NINETY\\_NINE\\_PERCENT).asUint256();\\n\\n if (address(assets[0]) == tokenSpent) {\\n amountsIn[0] = spentTokenBalance;\\n amountsIn[1] = receivedTokenBalance;\\n } else {\\n amountsIn[0] = receivedTokenBalance;\\n amountsIn[1] = spentTokenBalance;\\n }\\n}\\n```\\n\\nAdditionally, when `swap` is called and there is not enough FEI to re-initialize the Balancer auction, all the Tribe gets withdrawn. So the next time `swap` is called, there is again no Tribe in the contract.\\n```\\n// 5. Send remaining tokenReceived to target\\nIERC20(tokenReceived).transfer(tokenReceivingAddress, IERC20(tokenReceived).balanceOf(address(this)));\\n```\\n
Create an automated mechanism that mints or transfers Tribe to the swapper contract when it is needed.
null
```\\nfunction \\_getTokensIn(uint256 spentTokenBalance) internal view returns(uint256[] memory amountsIn) {\\n amountsIn = new uint256[](2);\\n\\n uint256 receivedTokenBalance = readOracle().mul(spentTokenBalance).mul(ONE\\_PERCENT).div(NINETY\\_NINE\\_PERCENT).asUint256();\\n\\n if (address(assets[0]) == tokenSpent) {\\n amountsIn[0] = spentTokenBalance;\\n amountsIn[1] = receivedTokenBalance;\\n } else {\\n amountsIn[0] = receivedTokenBalance;\\n amountsIn[1] = spentTokenBalance;\\n }\\n}\\n```\\n
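A Python rendering of `_getTokensIn`'s arithmetic makes the dependency concrete: every re-initialization needs a Tribe amount priced off the oracle at 1/99 of the FEI amount (the 99%/1% seed). The oracle price and its units are assumptions of this sketch.

```python
ONE_PERCENT = 1
NINETY_NINE_PERCENT = 99

def get_tokens_in(spent_token_balance, oracle_price):
    # receivedTokenBalance = readOracle() * spentTokenBalance * 1 / 99
    received_token_balance = (oracle_price * spent_token_balance
                              * ONE_PERCENT // NINETY_NINE_PERCENT)
    return spent_token_balance, received_token_balance

# with an illustrative oracle price of 1, seeding 9,900 FEI needs 100 Tribe
fei_in, tribe_in = get_tokens_in(9_900, 1)
print(fei_in, tribe_in)  # 9900 100
```

If the swapper holds no Tribe (because the previous `swap` sent the whole remaining balance to `tokenReceivingAddress`), this required amount cannot be supplied and the re-initialization fails.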