name | severity | description | recommendation | impact | function |
---|---|---|---|---|---|
The `_fee()` function is wrongly implemented in the code | medium | The _fee() function is wrongly implemented in the code, so the protocol will get fewer fees and the trader will earn more\\n```\\n        (uint256 unitFee, , ) = _fees(10**decimals(), settlementFeePercentage);\\n        amount = (newFee * 10**decimals()) / unitFee;\\n```\\n\\nLet's say we have: `newFee` = 100 USDC, USDC decimals = 6, and `settlementFeePercentage` is 20% ==> 200\\nThe `unitFee` will be 520_000\\n`amount` = (100 * 1_000_000) / 520_000, so `amount` = 192 USDC, which is supposed to be `amount` = 160 USDC | The `_fee()` function needs to calculate the fees in this way\\n```\\ntotal_fee = (5000 * amount)/ (10000 - sf)\\n```\\n | The protocol will earn fewer fees than expected | ```\\n        (uint256 unitFee, , ) = _fees(10**decimals(), settlementFeePercentage);\\n        amount = (newFee * 10**decimals()) / unitFee;\\n```\\n |
Bulls that are unable to receive NFTs will not be able to claim them later | medium | A lot of care has been taken to ensure that, if a bull has a contract address that doesn't accept ERC721s, the NFT is saved to `withdrawableCollectionTokenId` for later withdrawal. However, because there is no way to withdraw this token to a different address (and the original address doesn't accept NFTs), it will never be able to be claimed.\\nTo settle a contract, the bear calls `settleContract()`, which sends their NFT to the bull, and withdraws the collateral and premium to the bear.\\n```\\ntry IERC721(order.collection).safeTransferFrom(bear, bull, tokenId) {}\\ncatch (bytes memory) {\\n    // Transfer NFT to BvbProtocol\\n    IERC721(order.collection).safeTransferFrom(bear, address(this), tokenId);\\n    // Store that the bull has to retrieve it\\n    withdrawableCollectionTokenId[order.collection][tokenId] = bull;\\n}\\n\\nuint bearAssetAmount = order.premium + order.collateral;\\nif (bearAssetAmount > 0) {\\n    // Transfer payment tokens to the Bear\\n    IERC20(order.asset).safeTransfer(bear, bearAssetAmount);\\n}\\n```\\n\\nIn order to address the case that the bull is a contract that can't accept NFTs, the protocol uses a try-catch setup. If the transfer doesn't succeed, it transfers the NFT into the contract, and sets `withdrawableCollectionTokenId` so that the specific NFT is attributed to the bull for later withdrawal.\\nHowever, assuming the bull isn't an upgradeable contract, this withdrawal will never be possible, because their only option is to call the same function `safeTransferFrom` to the same contract address, which will fail in the same way.\\n```\\nfunction withdrawToken(bytes32 orderHash, uint tokenId) public {\\n    address collection = matchedOrders[uint(orderHash)].collection;\\n\\n    address recipient = withdrawableCollectionTokenId[collection][tokenId];\\n\\n    // Transfer NFT to recipient\\n    IERC721(collection).safeTransferFrom(address(this), recipient, tokenId);\\n\\n    // This token is not withdrawable anymore\\n    withdrawableCollectionTokenId[collection][tokenId] = address(0);\\n\\n    emit WithdrawnToken(orderHash, tokenId, recipient);\\n}\\n```\\n | There are a few possible solutions:\\nAdd a `to` field in the `withdrawToken` function, which allows the bull to withdraw the NFT to another address\\nCreate a function similar to `transferPosition` that can be used to transfer ownership of a withdrawable NFT\\nDecide that you want to punish bulls who aren't able to receive NFTs, in which case there is no need to save their address or implement a `withdrawToken` function | If a bull is a contract that can't receive NFTs, their orders will be matched, the bear will be able to withdraw their assets, but the bull's NFT will remain stuck in the BVB protocol contract. | ```\\ntry IERC721(order.collection).safeTransferFrom(bear, bull, tokenId) {}\\ncatch (bytes memory) {\\n    // Transfer NFT to BvbProtocol\\n    IERC721(order.collection).safeTransferFrom(bear, address(this), tokenId);\\n    // Store that the bull has to retrieve it\\n    withdrawableCollectionTokenId[order.collection][tokenId] = bull;\\n}\\n\\nuint bearAssetAmount = order.premium + order.collateral;\\nif (bearAssetAmount > 0) {\\n    // Transfer payment tokens to the Bear\\n    IERC20(order.asset).safeTransfer(bear, bearAssetAmount);\\n}\\n```\\n |
Attackers can use `reclaimContract()` to transfer assets in protocol to address(0) | high | `reclaimContract()` would transfer payment tokens to `bulls[contractId]`. An attacker can make `reclaimContract()` transfer assets to address(0).\\nAn attacker can use a fake order to trick `reclaimContract()`. The fake order needs to meet the following requirements:\\n`block.timestamp > order.expiry`.\\n`!settledContracts[contractId]`.\\n`!reclaimedContracts[contractId]`.\\nThe first one is easy to fulfil, as an attacker can decide the content of the fake order. And the others are all satisfied since the fake order couldn't be settled or reclaimed before.\\n```\\n    function reclaimContract(Order calldata order) public nonReentrant {\\n        bytes32 orderHash = hashOrder(order);\\n\\n        // ContractId\\n        uint contractId = uint(orderHash);\\n\\n        address bull = bulls[contractId];\\n\\n        // Check that the contract is expired\\n        require(block.timestamp > order.expiry, "NOT_EXPIRED_CONTRACT");\\n\\n        // Check that the contract is not settled\\n        require(!settledContracts[contractId], "SETTLED_CONTRACT");\\n\\n        // Check that the contract is not reclaimed\\n        require(!reclaimedContracts[contractId], "RECLAIMED_CONTRACT");\\n\\n        uint bullAssetAmount = order.premium + order.collateral;\\n        if (bullAssetAmount > 0) {\\n            // Transfer payment tokens to the Bull\\n            IERC20(order.asset).safeTransfer(bull, bullAssetAmount);\\n        }\\n\\n        reclaimedContracts[contractId] = true;\\n\\n        emit ReclaimedContract(orderHash, order);\\n    }\\n```\\n | There are multiple solutions for this problem.\\ncheck `bulls[contractId] != address(0)`\\ncheck that the order is matched: `matchedOrders[contractId].maker != address(0)` | An attacker can use this vulnerability to transfer assets from BvB to address(0). It results in serious loss of funds. | ```\\n    function reclaimContract(Order calldata order) public nonReentrant {\\n        bytes32 orderHash = hashOrder(order);\\n\\n        // ContractId\\n        uint contractId = uint(orderHash);\\n\\n        address bull = bulls[contractId];\\n\\n        // Check that the contract is expired\\n        require(block.timestamp > order.expiry, "NOT_EXPIRED_CONTRACT");\\n\\n        // Check that the contract is not settled\\n        require(!settledContracts[contractId], "SETTLED_CONTRACT");\\n\\n        // Check that the contract is not reclaimed\\n        require(!reclaimedContracts[contractId], "RECLAIMED_CONTRACT");\\n\\n        uint bullAssetAmount = order.premium + order.collateral;\\n        if (bullAssetAmount > 0) {\\n            // Transfer payment tokens to the Bull\\n            IERC20(order.asset).safeTransfer(bull, bullAssetAmount);\\n        }\\n\\n        reclaimedContracts[contractId] = true;\\n\\n        emit ReclaimedContract(orderHash, order);\\n    }\\n```\\n |
Transferring Ownership Might Break The Market | medium | After the transfer of the market ownership, the market might stop working, and no one could purchase any bond token from the market, leading to a loss of sale for the market makers.\\nThe `callbackAuthorized` mapping contains a list of whitelisted market owners authorized to use the callback. When the users call the `purchaseBond` function, it will check at Line 390 if the current market owner is still authorized to use a callback. Otherwise, the function will revert.\\n```\\nFile: BondBaseSDA.sol\\n    function purchaseBond(\\n        uint256 id_,\\n        uint256 amount_,\\n        uint256 minAmountOut_\\n    ) external override returns (uint256 payout) {\\n        if (msg.sender != address(_teller)) revert Auctioneer_NotAuthorized();\\n\\n        BondMarket storage market = markets[id_];\\n        BondTerms memory term = terms[id_];\\n\\n        // If market uses a callback, check that owner is still callback authorized\\n        if (market.callbackAddr != address(0) && !callbackAuthorized[market.owner])\\n            revert Auctioneer_NotAuthorized();\\n```\\n\\nHowever, if the market owner transfers the market ownership to someone else, the market will stop working, because the new market owner might not be on the list of whitelisted market owners (callbackAuthorized mapping). As such, no one can purchase any bond token.\\n```\\nFile: BondBaseSDA.sol\\n    function pushOwnership(uint256 id_, address newOwner_) external override {\\n        if (msg.sender != markets[id_].owner) revert Auctioneer_OnlyMarketOwner();\\n        newOwners[id_] = newOwner_;\\n    }\\n```\\n | Before pushing the ownership, if the market uses a callback, implement an additional validation check to ensure that the new market owner has been whitelisted to use the callback. This will ensure that transferring the market ownership will not break the market due to the new market owner not being whitelisted.\\n```\\nfunction pushOwnership(uint256 id_, address newOwner_) external override {\\n    if (msg.sender != markets[id_].owner) revert Auctioneer_OnlyMarketOwner();\\n// Add the line below\\n    if (markets[id_].callbackAddr != address(0) && !callbackAuthorized[newOwner_])\\n// Add the line below\\n        revert newOwnerNotAuthorizedToUseCallback();\\n    newOwners[id_] = newOwner_;\\n}\\n```\\n | After the transfer of the market ownership, the market might stop working, and no one could purchase any bond token from the market, leading to a loss of sale for the market makers. | ```\\nFile: BondBaseSDA.sol\\n    function purchaseBond(\\n        uint256 id_,\\n        uint256 amount_,\\n        uint256 minAmountOut_\\n    ) external override returns (uint256 payout) {\\n        if (msg.sender != address(_teller)) revert Auctioneer_NotAuthorized();\\n\\n        BondMarket storage market = markets[id_];\\n        BondTerms memory term = terms[id_];\\n\\n        // If market uses a callback, check that owner is still callback authorized\\n        if (market.callbackAddr != address(0) && !callbackAuthorized[market.owner])\\n            revert Auctioneer_NotAuthorized();\\n```\\n |
Market Price Lower Than Expected | medium | The market price does not conform to the specification documented within the whitepaper. As a result, the computed market price is lower than expected.\\nThe following definition of the market price is taken from the whitepaper. Taken from Page 13 of the whitepaper - Definition 25\\n\\nThe integer implementation of the market price must be rounded up per the whitepaper. This ensures that the integer implementation of the market price is greater than or equal to the real value of the market price so as to protect makers from selling tokens at a lower price than expected.\\nWithin the `BondBaseSDA.marketPrice` function, the computation of the market price is rounded up in Line 688, which conforms to the specification.\\n```\\nFile: BondBaseSDA.sol\\n function marketPrice(uint256 id_) public view override returns (uint256) {\\n uint256 price = currentControlVariable(id_).mulDivUp(currentDebt(id_), markets[id_].scale);\\n\\n return (price > markets[id_].minPrice) ? price : markets[id_].minPrice;\\n }\\n```\\n\\nHowever, within the `BondBaseSDA._currentMarketPrice` function, the market price is rounded down, resulting in the makers selling tokens at a lower price than expected.\\n```\\nFile: BondBaseSDA.sol\\n function _currentMarketPrice(uint256 id_) internal view returns (uint256) {\\n BondMarket memory market = markets[id_];\\n return terms[id_].controlVariable.mulDiv(market.totalDebt, market.scale);\\n }\\n```\\n | Ensure the market price is rounded up so that the desired property can be achieved and the makers will not be selling tokens at a lower price than expected.\\n```\\nfunction _currentMarketPrice(uint256 id_) internal view returns (uint256) {\\n BondMarket memory market = markets[id_];\\n// Remove the line below\\n return terms[id_].controlVariable.mulDiv(market.totalDebt, market.scale);\\n// Add the line below\\n return terms[id_].controlVariable.mulDivUp(market.totalDebt, market.scale);\\n}\\n```\\n | Loss for the makers as their tokens are sold at a lower price than expected.\\nAdditionally, the affected `BondBaseSDA._currentMarketPrice` function is used within the `BondBaseSDA._decayAndGetPrice` function to derive the market price. Since a lower market price will be returned, this will lead to a higher amount of payout tokens. Subsequently, the `lastDecayIncrement` will be higher than expected, which will lead to a lower `totalDebt`. Lower debt means a lower market price will be computed later. | ```\\nFile: BondBaseSDA.sol\\n function marketPrice(uint256 id_) public view override returns (uint256) {\\n uint256 price = currentControlVariable(id_).mulDivUp(currentDebt(id_), markets[id_].scale);\\n\\n return (price > markets[id_].minPrice) ? price : markets[id_].minPrice;\\n }\\n```\\n |
Teller Cannot Be Removed From Callback Contract | medium | If a vulnerable Teller is being exploited by an attacker, there is no way for the owner of the Callback Contract to remove the vulnerable Teller from their Callback Contract.\\nThe Callback Contract is missing the feature to remove a Teller. Once a Teller has been added to the whitelist (approvedMarkets mapping), it is not possible to remove the Teller from the whitelist.\\n```\\nFile: BondBaseCallback.sol\\n /* ========== WHITELISTING ========== */\\n\\n /// @inheritdoc IBondCallback\\n function whitelist(address teller_, uint256 id_) external override onlyOwner {\\n // Check that the market id is a valid, live market on the aggregator\\n try _aggregator.isLive(id_) returns (bool live) {\\n if (!live) revert Callback_MarketNotSupported(id_);\\n } catch {\\n revert Callback_MarketNotSupported(id_);\\n }\\n\\n // Check that the provided teller is the teller for the market ID on the stored aggregator\\n // We could pull the teller from the aggregator, but requiring the teller to be passed in\\n // is more explicit about which contract is being whitelisted\\n if (teller_ != address(_aggregator.getTeller(id_))) revert Callback_TellerMismatch();\\n\\n approvedMarkets[teller_][id_] = true;\\n }\\n```\\n | Consider implementing an additional function to allow the removal of a Teller from the whitelist (approvedMarkets mapping), so that a vulnerable Teller can be removed swiftly if needed.\\n```\\nfunction removeFromWhitelist(address teller_, uint256 id_) external override onlyOwner {\\n approvedMarkets[teller_][id_] = false;\\n}\\n```\\n\\nNote: Although the owner of the Callback Contract can DOS its own market by abusing the `removeFromWhitelist` function, no sensible owner would do so. | In the event that a whitelisted Teller is found to be vulnerable and has been actively exploited by an attacker in the wild, the owner of the Callback Contract needs to mitigate the issue swiftly by removing the vulnerable Teller from the Callback Contract to stop it from draining the asset within the Callback Contract. However, the mitigation effort will be hindered by the fact there is no way to remove a Teller within the Callback Contract once it has been whitelisted. Thus, it might not be possible to stop the attacker from exploiting the vulnerable Teller to drain assets within the Callback Contract. The Callback Contract owners would need to find a workaround to block the attack, which will introduce an unnecessary delay to the recovery process where every second counts.\\nAdditionally, if the owner accidentally whitelisted the wrong Teller, there is no way to remove it. | ```\\nFile: BondBaseCallback.sol\\n /* ========== WHITELISTING ========== */\\n\\n /// @inheritdoc IBondCallback\\n function whitelist(address teller_, uint256 id_) external override onlyOwner {\\n // Check that the market id is a valid, live market on the aggregator\\n try _aggregator.isLive(id_) returns (bool live) {\\n if (!live) revert Callback_MarketNotSupported(id_);\\n } catch {\\n revert Callback_MarketNotSupported(id_);\\n }\\n\\n // Check that the provided teller is the teller for the market ID on the stored aggregator\\n // We could pull the teller from the aggregator, but requiring the teller to be passed in\\n // is more explicit about which contract is being whitelisted\\n if (teller_ != address(_aggregator.getTeller(id_))) revert Callback_TellerMismatch();\\n\\n approvedMarkets[teller_][id_] = true;\\n }\\n```\\n |
`BondAggregator.findMarketFor` Function Will Break In Certain Conditions | medium | `BondAggregator.findMarketFor` function will break when the `BondBaseSDA.payoutFor` function within the for-loop reverts under certain conditions.\\nThe `BondBaseSDA.payoutFor` function will revert if the computed payout is larger than the market's max payout. Refer to Line 711 below.\\n```\\nFile: BondBaseSDA.sol\\n    function payoutFor(\\n        uint256 amount_,\\n        uint256 id_,\\n        address referrer_\\n    ) public view override returns (uint256) {\\n        // Calculate the payout for the given amount of tokens\\n        uint256 fee = amount_.mulDiv(_teller.getFee(referrer_), 1e5);\\n        uint256 payout = (amount_ - fee).mulDiv(markets[id_].scale, marketPrice(id_));\\n\\n        // Check that the payout is less than or equal to the maximum payout,\\n        // Revert if not, otherwise return the payout\\n        if (payout > markets[id_].maxPayout) {\\n            revert Auctioneer_MaxPayoutExceeded();\\n        } else {\\n            return payout;\\n        }\\n    }\\n```\\n\\nThe `BondAggregator.findMarketFor` function will call the `BondBaseSDA.payoutFor` function at Line 245. The `BondBaseSDA.payoutFor` function will revert if the final computed payout is larger than the `markets[id_].maxPayout` as mentioned earlier. This will cause the entire for-loop to "break" and the transaction to revert.\\nAssume that the user configures the `minAmountOut_` to be `0`, then the condition `minAmountOut_ <= maxPayout` Line 244 will always be true. The `amountIn_` will always be passed to the `payoutFor` function. In some markets where the computed payout is larger than the market's max payout, the `BondAggregator.findMarketFor` function will revert.\\n```\\nFile: BondAggregator.sol\\n    /// @inheritdoc IBondAggregator\\n    function findMarketFor(\\n        address payout_,\\n        address quote_,\\n        uint256 amountIn_,\\n        uint256 minAmountOut_,\\n        uint256 maxExpiry_\\n    ) external view returns (uint256) {\\n        uint256[] memory ids = marketsFor(payout_, quote_);\\n        uint256 len = ids.length;\\n        uint256[] memory payouts = new uint256[](len);\\n\\n        uint256 highestOut;\\n        uint256 id = type(uint256).max; // set to max so an empty set doesn't return 0, the first index\\n        uint48 vesting;\\n        uint256 maxPayout;\\n        IBondAuctioneer auctioneer;\\n        for (uint256 i; i < len; ++i) {\\n            auctioneer = marketsToAuctioneers[ids[i]];\\n            (, , , , vesting, maxPayout) = auctioneer.getMarketInfoForPurchase(ids[i]);\\n\\n            uint256 expiry = (vesting <= MAX_FIXED_TERM) ? block.timestamp + vesting : vesting;\\n\\n            if (expiry <= maxExpiry_) {\\n                payouts[i] = minAmountOut_ <= maxPayout\\n                    ? payoutFor(amountIn_, ids[i], address(0))\\n                    : 0;\\n\\n                if (payouts[i] > highestOut) {\\n                    highestOut = payouts[i];\\n                    id = ids[i];\\n                }\\n            }\\n        }\\n\\n        return id;\\n    }\\n```\\n | Consider using try-catch or address.call to handle the revert of the `BondBaseSDA.payoutFor` function within the for-loop gracefully. This ensures that a single revert of the `BondBaseSDA.payoutFor` function will not affect the entire for-loop within the `BondAggregator.findMarketFor` function. | The find market feature within the protocol is broken under certain conditions. As such, users would not be able to obtain the list of markets that meet their requirements. The market makers affected by this issue will lose the opportunity to sell their bond tokens. | ```\\nFile: BondBaseSDA.sol\\n    function payoutFor(\\n        uint256 amount_,\\n        uint256 id_,\\n        address referrer_\\n    ) public view override returns (uint256) {\\n        // Calculate the payout for the given amount of tokens\\n        uint256 fee = amount_.mulDiv(_teller.getFee(referrer_), 1e5);\\n        uint256 payout = (amount_ - fee).mulDiv(markets[id_].scale, marketPrice(id_));\\n\\n        // Check that the payout is less than or equal to the maximum payout,\\n        // Revert if not, otherwise return the payout\\n        if (payout > markets[id_].maxPayout) {\\n            revert Auctioneer_MaxPayoutExceeded();\\n        } else {\\n            return payout;\\n        }\\n    }\\n```\\n |
Debt Decay Faster Than Expected | medium | The debt decays at a rate faster than expected, causing market makers to sell bond tokens at a lower price than expected.\\nThe following definition of the debt decay reference time following any purchases at time `t` is taken from the whitepaper. The second variable, which is the delay increment, is rounded up. Following is taken from Page 15 of the whitepaper - Definition 27\\n\\nHowever, the actual implementation in the codebase differs from the specification. At Line 514, the delay increment is rounded down instead.\\n```\\nFile: BondBaseSDA.sol\\n        // Set last decay timestamp based on size of purchase to linearize decay\\n        uint256 lastDecayIncrement = debtDecayInterval.mulDiv(payout_, lastTuneDebt);\\n        metadata[id_].lastDecay += uint48(lastDecayIncrement);\\n```\\n | When computing the `lastDecayIncrement`, the result should be rounded up.\\n```\\n// Set last decay timestamp based on size of purchase to linearize decay\\n// Remove the line below\\n uint256 lastDecayIncrement = debtDecayInterval.mulDiv(payout_, lastTuneDebt);\\n// Add the line below\\n uint256 lastDecayIncrement = debtDecayInterval.mulDivUp(payout_, lastTuneDebt);\\nmetadata[id_].lastDecay += uint48(lastDecayIncrement);\\n```\\n | When the delay increment (TD) is rounded down, the debt decay reference time increment will be smaller than expected. The debt component will then decay at a faster rate. As a result, the market price will not be adjusted in an optimized manner, and the market price will fall faster than expected, causing market makers to sell bond tokens at a lower price than expected.\\nFollowing is taken from Page 8 of the whitepaper - Definition 8\\n | ```\\nFile: BondBaseSDA.sol\\n        // Set last decay timestamp based on size of purchase to linearize decay\\n        uint256 lastDecayIncrement = debtDecayInterval.mulDiv(payout_, lastTuneDebt);\\n        metadata[id_].lastDecay += uint48(lastDecayIncrement);\\n```\\n |
Fixed Term Bond tokens can be minted with non-rounded expiry | medium | Fixed Term Tellers intend to mint tokens that expire once per day, to consolidate liquidity and create a uniform experience. However, this rounding is not enforced on the external `deploy()` function, which allows for tokens expiring at unexpected times.\\nIn `BondFixedTermTeller.sol`, new tokenIds are deployed through the `_handlePayout()` function. The function calculates the expiry (rounded down to the nearest day), uses this expiry to create a tokenId, and — if that tokenId doesn't yet exist — deploys it.\\n```\\n// rest of code\\nexpiry = ((vesting_ + uint48(block.timestamp)) / uint48(1 days)) * uint48(1 days);\\n\\n// Fixed-term user payout information is handled in BondTeller.\\n// Teller mints ERC-1155 bond tokens for user.\\nuint256 tokenId = getTokenId(payoutToken_, expiry);\\n\\n// Create new bond token if it doesn't exist yet\\nif (!tokenMetadata[tokenId].active) {\\n _deploy(tokenId, payoutToken_, expiry);\\n}\\n// rest of code\\n```\\n\\nThis successfully consolidates all liquidity into one daily tokenId, which expires (as expected) at the time included in the tokenId.\\nHowever, if the `deploy()` function is called directly, no such rounding occurs:\\n```\\nfunction deploy(ERC20 underlying_, uint48 expiry_)\\n external\\n override\\n nonReentrant\\n returns (uint256)\\n{\\n uint256 tokenId = getTokenId(underlying_, expiry_);\\n // Only creates token if it does not exist\\n if (!tokenMetadata[tokenId].active) {\\n _deploy(tokenId, underlying_, expiry_);\\n }\\n return tokenId;\\n}\\n```\\n\\nThis creates a mismatch between the tokenId time and the real expiry time, as tokenId is calculated by rounding the expiry down to the nearest day:\\n```\\nuint256 tokenId = uint256(\\n keccak256(abi.encodePacked(underlying_, expiry_ / uint48(1 days)))\\n);\\n```\\n\\n... while the `_deploy()` function saves the original expiry:\\n```\\ntokenMetadata[tokenId_] = TokenMetadata(\\n true,\\n underlying_,\\n uint8(underlying_.decimals()),\\n expiry_,\\n 0\\n);\\n```\\n | Include the same rounding process in `deploy()` as is included in _handlePayout():\\n```\\nfunction deploy(ERC20 underlying_, uint48 expiry_)\\n external\\n override\\n nonReentrant\\n returns (uint256)\\n {\\n expiry = ((vesting_ + uint48(block.timestamp)) / uint48(1 days)) * uint48(1 days);\\n uint256 tokenId = getTokenId(underlying_, expiry_);\\n // rest of code\\n```\\n | The `deploy()` function causes a number of issues:\\nTokens can be deployed that don't expire at the expected daily time, which may cause issues with your front end or break user's expectations\\nTokens can expire at times that don't align with the time included in the tokenId\\nMalicious users can pre-deploy tokens at future timestamps to "take over" the token for a given day and lock it at a later time stamp, which then "locks in" that expiry time and can't be changed by the protocol | ```\\n// rest of code\\nexpiry = ((vesting_ + uint48(block.timestamp)) / uint48(1 days)) * uint48(1 days);\\n\\n// Fixed-term user payout information is handled in BondTeller.\\n// Teller mints ERC-1155 bond tokens for user.\\nuint256 tokenId = getTokenId(payoutToken_, expiry);\\n\\n// Create new bond token if it doesn't exist yet\\nif (!tokenMetadata[tokenId].active) {\\n _deploy(tokenId, payoutToken_, expiry);\\n}\\n// rest of code\\n```\\n |
Fixed Term Teller tokens can be created with an expiry in the past | high | The Fixed Term Teller does not allow tokens to be created with a timestamp in the past. This is a fact that protocols using this feature will expect to hold and build their systems around. However, users can submit expiry timestamps slightly in the future, which correlate to tokenIds in the past, which allows them to bypass this check.\\nIn `BondFixedTermTeller.sol`, the `create()` function allows protocols to trade their payout tokens directly for bond tokens. The expectation is that protocols will build their own mechanisms around this. It is explicitly required that they cannot do this for bond tokens that expire in the past, only those that have yet to expire:\\n```\\nif (expiry_ < block.timestamp) revert Teller_InvalidParams();\\n```\\n\\nHowever, because tokenIds round timestamps down to the latest day, protocols are able to get around this check.\\nHere's an example:\\nThe most recently expired token has an expiration time of 1668524400 (correlates to 9am this morning)\\nIt is currently 1668546000 (3pm this afternoon)\\nA protocol calls create() with an expiry of 1668546000 + 1\\nThis passes the check that `expiry_ >= block.timestamp`\\nWhen the expiry is passed to `getTokenId()` it rounds the time down to the latest day, which is the day corresponding with 9am this morning\\nThis expiry associated with this tokenId is 9am this morning, so they are able to redeem their tokens instantly | Before checking whether `expiry_ < block.timestamp`, expiry should be rounded to the nearest day:\\n```\\nexpiry = ((vesting_ + uint48(block.timestamp)) / uint48(1 days)) * uint48(1 days);\\n```\\n | Protocols can bypass the check that all created tokens must have an expiry in the future, and mint tokens with a past expiry that can be redeemed immediately.\\nThis may not cause a major problem for Bond Protocol itself, but protocols will be building on top of this feature without expecting this behavior.\\nLet's consider, for example, a protocol that builds a mechanism where users can stake some asset, and the protocol will trade payout tokens to create bond tokens for them at a discount, with the assumption that they will expire in the future. This issue could create an opening for a savvy user to stake, mint bond tokens, redeem and dump them immediately, buy more assets to stake, and continue this cycle to earn arbitrage returns and tank the protocol's token.\\nBecause there are a number of situations like the one above where this issue could lead to a major loss of funds for a protocol building on top of Bond, I consider this a high severity. | ```\\nif (expiry_ < block.timestamp) revert Teller_InvalidParams();\\n```\\n |
findMarketFor() missing check minAmountOut_ | medium | In BondAggregator#findMarketFor(), `minAmountOut_` does not actually take effect: the function may return a market whose payout is smaller than `minAmountOut_`, causing users to waste gas on purchase calls that will fail\\nBondAggregator#findMarketFor() checks `minAmountOut_ <= maxPayout`, but the actual payout for `amountIn_` is never checked to be greater than or equal to `minAmountOut_`\\n```\\n    function findMarketFor(\\n        address payout_,\\n        address quote_,\\n        uint256 amountIn_,\\n        uint256 minAmountOut_,\\n        uint256 maxExpiry_\\n    ) external view returns (uint256) {\\n// rest of code\\n        if (expiry <= maxExpiry_) {\\n            payouts[i] = minAmountOut_ <= maxPayout\\n                ? payoutFor(amountIn_, ids[i], address(0))\\n                : 0;\\n\\n            if (payouts[i] > highestOut) {//****@audit not check payouts[i] >= minAmountOut_******//\\n                highestOut = payouts[i];\\n                id = ids[i];\\n            }\\n        }\\n```\\n | ```\\n    function findMarketFor(\\n        address payout_,\\n        address quote_,\\n        uint256 amountIn_,\\n        uint256 minAmountOut_,\\n        uint256 maxExpiry_\\n    ) external view returns (uint256) {\\n// rest of code\\n        if (expiry <= maxExpiry_) {\\n            payouts[i] = minAmountOut_ <= maxPayout\\n                ? payoutFor(amountIn_, ids[i], address(0))\\n                : 0;\\n\\n-            if (payouts[i] > highestOut) {\\n+            if (payouts[i] >= minAmountOut_ && payouts[i] > highestOut) {\\n                highestOut = payouts[i];\\n                id = ids[i];\\n            }\\n        }\\n```\\n | The user gets the optimal market through BondAggregator#findMarketFor(), but it may incorrectly return a market whose payout is smaller than minAmountOut_, so the call to purchase must fail, resulting in wasted gas | ```\\n    function findMarketFor(\\n        address payout_,\\n        address quote_,\\n        uint256 amountIn_,\\n        uint256 minAmountOut_,\\n        uint256 maxExpiry_\\n    ) external view returns (uint256) {\\n// rest of code\\n        if (expiry <= maxExpiry_) {\\n            payouts[i] = minAmountOut_ <= maxPayout\\n                ? payoutFor(amountIn_, ids[i], address(0))\\n                : 0;\\n\\n            if (payouts[i] > highestOut) {//****@audit not check payouts[i] >= minAmountOut_******//\\n                highestOut = payouts[i];\\n                id = ids[i];\\n            }\\n        }\\n```\\n |
Existing Circuit Breaker Implementation Allow Faster Taker To Extract Payout Tokens From Market | medium | The current implementation of the circuit breaker is not optimal. Thus, the market maker will lose an excessive amount of payout tokens if a quoted token suddenly loses a large amount of value, even with a circuit breaker in place.\\nWhen the amount of the payout tokens purchased by the taker exceeds the `term.maxDebt`, the taker is still allowed to carry on with the transaction, and the market will only be closed after the current transaction is completed.\\n```\\nFile: BondBaseSDA.sol\\n        // Circuit breaker. If max debt is breached, the market is closed\\n        if (term.maxDebt < market.totalDebt) {\\n            _close(id_);\\n        } else {\\n            // If market will continue, the control variable is tuned to to expend remaining capacity over remaining market duration\\n            _tune(id_, currentTime, price);\\n        }\\n```\\n\\nAssume that the state of the SDAM at T0 is as follows:\\n`term.maxDebt` is 110 (debt buffer = 10%)\\n`maxPayout` is 100\\n`market.totalDebt` is 99\\nAssume that the quoted token suddenly loses a large amount of value (e.g. stablecoin depeg causing the quote token to drop to almost zero). Bob decided to purchase as many payout tokens as possible before reaching the `maxPayout` limit to maximize the value he could extract from the market. Assume that Bob is able to purchase 50 bond tokens at T1 before reaching the `maxPayout` limit. As such, the state of the SDAM at T1 will be as follows:\\n`term.maxDebt` = 110\\n`maxPayout` = 100\\n`market.totalDebt` = 99 + 50 = 149\\nIn the above scenario, Bob's purchase has already breached the `term.maxDebt` limit. However, he could still purchase the 50 bond tokens in the current transaction. | Consider only allowing takers to purchase bond tokens up to the `term.maxDebt` limit.\\nFor instance, based on the earlier scenario, only allow Bob to purchase up to 11 bond tokens (term.maxDebt[110] - market.totalDebt[99]) instead of allowing him to purchase 50 bond tokens.\\nIf Bob attempts to purchase 50 bond tokens, the market can proceed to purchase the 11 bond tokens for Bob, and the remaining quote tokens can be refunded back to Bob. After that, since the `term.maxDebt (110) == market.totalDebt (110)`, the market can trigger the circuit breaker to close the market to protect the market from potential extreme market conditions.\\nThis ensures that bond tokens beyond the `term.maxDebt` limit would not be sold to the taker during extreme market conditions. | In the event that the price of the quote token falls to almost zero (e.g. 0.0001 dollars), then the fastest taker will be able to extract as many payout tokens as possible before reaching the `maxPayout` limit from the market. The extracted payout tokens are essentially free for the fastest taker. Taker gain is maker loss.\\nAdditionally, in the event that a quoted token suddenly loses a large amount of value, the amount of payout tokens lost by the market maker is capped at the `maxPayout` limit instead of capping the loss at the `term.maxDebt` limit. This results in the market makers losing more payout tokens than expected, and their payout tokens being sold to the takers at a very low price (e.g. 0.0001 dollars).\\nThe market makers will suffer more loss if the `maxPayout` limit of their markets is higher. | ```\\nFile: BondBaseSDA.sol\\n        // Circuit breaker. If max debt is breached, the market is closed\\n        if (term.maxDebt < market.totalDebt) {\\n            _close(id_);\\n        } else {\\n            // If market will continue, the control variable is tuned to to expend remaining capacity over remaining market duration\\n            _tune(id_, currentTime, price);\\n        }\\n```\\n |
Create Fee Discount Feature Is Broken | medium | The create fee discount feature is found to be broken within the protocol.\\nThe create fee discount feature relies on the `createFeeDiscount` state variable to determine the fee to be discounted from the protocol fee. However, it was observed that there is no way to initialize the `createFeeDiscount` state variable. As a result, the `createFeeDiscount` state variable will always be zero.\\n```\\nFile: BondFixedExpiryTeller.sol\\n // If fee is greater than the create discount, then calculate the fee and store it\\n // Otherwise, fee is zero.\\n if (protocolFee > createFeeDiscount) {\\n // Calculate fee amount\\n uint256 feeAmount = amount_.mulDiv(protocolFee - createFeeDiscount, FEE_DECIMALS);\\n rewards[_protocol][underlying_] += feeAmount;\\n\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_ - feeAmount);\\n\\n return (bondToken, amount_ - feeAmount);\\n } else {\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_);\\n\\n return (bondToken, amount_);\\n }\\n```\\n\\n```\\nFile: BondFixedTermTeller.sol\\n // If fee is greater than the create discount, then calculate the fee and store it\\n // Otherwise, fee is zero.\\n if (protocolFee > createFeeDiscount) {\\n // Calculate fee amount\\n uint256 feeAmount = amount_.mulDiv(protocolFee - createFeeDiscount, FEE_DECIMALS);\\n rewards[_protocol][underlying_] += feeAmount;\\n\\n // Mint new bond tokens\\n _mintToken(msg.sender, tokenId, amount_ - feeAmount);\\n\\n return (tokenId, amount_ - feeAmount);\\n } else {\\n // Mint new bond tokens\\n _mintToken(msg.sender, tokenId, amount_);\\n\\n return (tokenId, amount_);\\n }\\n```\\n | Implement a setter method for the `createFeeDiscount` state variable and the necessary verification checks.\\n```\\nfunction setCreateFeeDiscount(uint48 createFeeDiscount_) external requiresAuth {\\n if (createFeeDiscount_ > protocolFee) revert Teller_InvalidParams();\\n if (createFeeDiscount_ > 5e3) revert Teller_InvalidParams();\\n createFeeDiscount = createFeeDiscount_;\\n}\\n```\\n | The create fee discount feature is broken within the protocol. There is no way for the protocol team to configure a discount for the users of the `BondFixedExpiryTeller.create` and `BondFixedTermTeller.create` functions. As such, the users will not obtain any discount from the protocol when using the create function. | ```\\nFile: BondFixedExpiryTeller.sol\\n // If fee is greater than the create discount, then calculate the fee and store it\\n // Otherwise, fee is zero.\\n if (protocolFee > createFeeDiscount) {\\n // Calculate fee amount\\n uint256 feeAmount = amount_.mulDiv(protocolFee - createFeeDiscount, FEE_DECIMALS);\\n rewards[_protocol][underlying_] += feeAmount;\\n\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_ - feeAmount);\\n\\n return (bondToken, amount_ - feeAmount);\\n } else {\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_);\\n\\n return (bondToken, amount_);\\n }\\n```\\n |
Auctioneer Cannot Be Removed From The Protocol | medium | If a vulnerable Auctioneer is being exploited by an attacker, there is no way to remove the vulnerable Auctioneer from the protocol.\\nThe protocol is missing the feature to remove an auctioneer. Once an auctioneer has been added to the whitelist, it is not possible to remove the auctioneer from the whitelist.\\n```\\nFile: BondAggregator.sol\\n function registerAuctioneer(IBondAuctioneer auctioneer_) external requiresAuth {\\n // Restricted to authorized addresses\\n\\n // Check that the auctioneer is not already registered\\n if (_whitelist[address(auctioneer_)])\\n revert Aggregator_AlreadyRegistered(address(auctioneer_));\\n\\n // Add the auctioneer to the whitelist\\n auctioneers.push(auctioneer_);\\n _whitelist[address(auctioneer_)] = true;\\n }\\n```\\n | Consider implementing an additional function to allow the removal of an Auctioneer from the whitelist, so that vulnerable Auctioneer can be removed swiftly if needed.\\n```\\nfunction deregisterAuctioneer(IBondAuctioneer auctioneer_) external requiresAuth {\\n // Remove the auctioneer from the whitelist\\n _whitelist[address(auctioneer_)] = false;\\n}\\n```\\n | In the event that a whitelisted Auctioneer is found to be vulnerable and has been actively exploited by an attacker in the wild, the protocol needs to mitigate the issue swiftly by removing the vulnerable Auctioneer from the protocol. However, the mitigation effort will be hindered by the fact there is no way to remove an Auctioneer within the protocol once it has been whitelisted. Thus, it might not be possible to stop the attacker from exploiting the vulnerable Auctioneer. The protocol team would need to find a workaround to block the attack, which will introduce an unnecessary delay to the recovery process where every second counts.\\nAdditionally, if the admin accidentally whitelisted the wrong Auctioneer, there is no way to remove it. | ```\\nFile: BondAggregator.sol\\n function registerAuctioneer(IBondAuctioneer auctioneer_) external requiresAuth {\\n // Restricted to authorized addresses\\n\\n // Check that the auctioneer is not already registered\\n if (_whitelist[address(auctioneer_)])\\n revert Aggregator_AlreadyRegistered(address(auctioneer_));\\n\\n // Add the auctioneer to the whitelist\\n auctioneers.push(auctioneer_);\\n _whitelist[address(auctioneer_)] = true;\\n }\\n```\\n |
BondBaseSDA.setDefaults doesn't validate inputs | medium | BondBaseSDA.setDefaults doesn't validate its inputs, which can lead to initializing new markets incorrectly\\n```\\n    function setDefaults(uint32[6] memory defaults_) external override requiresAuth {\\n        // Restricted to authorized addresses\\n        defaultTuneInterval = defaults_[0];\\n        defaultTuneAdjustment = defaults_[1];\\n        minDebtDecayInterval = defaults_[2];\\n        minDepositInterval = defaults_[3];\\n        minMarketDuration = defaults_[4];\\n        minDebtBuffer = defaults_[5];\\n    }\\n```\\n\\nAs you can see, the BondBaseSDA.setDefaults function doesn't do any checks. Because of that, it's possible to provide values that will break market functionality.\\nFor example, you can set `minDepositInterval` to be bigger than `minMarketDuration`, and it will not be possible to create a new market.\\nOr you can set `minDebtBuffer` to 100% or 0%, which will break the market-closing logic. | Add input validation. | New markets can't be created, or market logic will not work as designed. | ```\\n    function setDefaults(uint32[6] memory defaults_) external override requiresAuth {\\n        // Restricted to authorized addresses\\n        defaultTuneInterval = defaults_[0];\\n        defaultTuneAdjustment = defaults_[1];\\n        minDebtDecayInterval = defaults_[2];\\n        minDepositInterval = defaults_[3];\\n        minMarketDuration = defaults_[4];\\n        minDebtBuffer = defaults_[5];\\n    }\\n```\\n |
BondAggregator.liveMarketsBy eventually will revert because of block gas limit | medium | BondAggregator.liveMarketsBy will eventually revert because of the block gas limit\\n```\\n    function liveMarketsBy(address owner_) external view returns (uint256[] memory) {\\n        uint256 count;\\n        IBondAuctioneer auctioneer;\\n        for (uint256 i; i < marketCounter; ++i) {\\n            auctioneer = marketsToAuctioneers[i];\\n            if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n                ++count;\\n            }\\n        }\\n\\n\\n        uint256[] memory ids = new uint256[](count);\\n        count = 0;\\n        for (uint256 i; i < marketCounter; ++i) {\\n            auctioneer = marketsToAuctioneers[i];\\n            if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n                ids[count] = i;\\n                ++count;\\n            }\\n        }\\n\\n\\n        return ids;\\n    }\\n```\\n\\nThe BondAggregator.liveMarketsBy function loops through all markets and makes at least `marketCounter` external calls (when no markets are live) and at most 4 * `marketCounter` external calls (when all markets are live and the owner matches). This consumes a lot of gas, even though it is called from a view function, and each new market increases the loop size.\\nThat means that after some time the `marketsToAuctioneers` mapping will be big enough that the gas amount provided for a view/pure function will not be enough to retrieve all the data (50 million gas according to this). So the function will revert.\\nA similar problem exists with the `findMarketFor`, `marketsFor` and `liveMarketsFor` functions. | Remove inactive markets, or add start and end indices to the functions. | The functions will always revert, and whoever depends on them will not be able to get the information. | ```\\n    function liveMarketsBy(address owner_) external view returns (uint256[] memory) {\\n        uint256 count;\\n        IBondAuctioneer auctioneer;\\n        for (uint256 i; i < marketCounter; ++i) {\\n            auctioneer = marketsToAuctioneers[i];\\n            if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n                ++count;\\n            }\\n        }\\n\\n\\n        uint256[] memory ids = new uint256[](count);\\n        count = 0;\\n        for (uint256 i; i < marketCounter; ++i) {\\n            auctioneer = marketsToAuctioneers[i];\\n            if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n                ids[count] = i;\\n                ++count;\\n            }\\n        }\\n\\n\\n        return ids;\\n    }\\n```\\n |
meta.tuneBelowCapacity param is not updated when BondBaseSDA.setIntervals is called | medium | When the BondBaseSDA.setIntervals function is called, the meta.tuneBelowCapacity param is not updated, which has an impact on price tuning.\\n```\\n    function setIntervals(uint256 id_, uint32[3] calldata intervals_) external override {\\n        // Check that the market is live\\n        if (!isLive(id_)) revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that the intervals are non-zero\\n        if (intervals_[0] == 0 || intervals_[1] == 0 || intervals_[2] == 0)\\n            revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that tuneInterval >= tuneAdjustmentDelay\\n        if (intervals_[0] < intervals_[1]) revert Auctioneer_InvalidParams();\\n\\n\\n        BondMetadata storage meta = metadata[id_];\\n        // Check that tuneInterval >= depositInterval\\n        if (intervals_[0] < meta.depositInterval) revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that debtDecayInterval >= minDebtDecayInterval\\n        if (intervals_[2] < minDebtDecayInterval) revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that sender is market owner\\n        BondMarket memory market = markets[id_];\\n        if (msg.sender != market.owner) revert Auctioneer_OnlyMarketOwner();\\n\\n\\n        // Update intervals\\n        meta.tuneInterval = intervals_[0];\\n        meta.tuneIntervalCapacity = market.capacity.mulDiv(\\n            uint256(intervals_[0]),\\n            uint256(terms[id_].conclusion) - block.timestamp\\n        ); // don't have a stored value for market duration, this will update tuneIntervalCapacity based on time remaining\\n        meta.tuneAdjustmentDelay = intervals_[1];\\n        meta.debtDecayInterval = intervals_[2];\\n    }\\n```\\n\\n`meta.tuneInterval` has an impact on `meta.tuneIntervalCapacity`. That means that when you change the tuning interval you also change the capacity that is operated on during tuning. There is also one more param that depends on this, but it is not counted here.\\n```\\n        if (\\n            (market.capacity < meta.tuneBelowCapacity && timeNeutralCapacity < initialCapacity) ||\\n            (time_ >= meta.lastTune + meta.tuneInterval && timeNeutralCapacity > initialCapacity)\\n        ) {\\n            // Calculate the correct payout to complete on time assuming each bond\\n            // will be max size in the desired deposit interval for the remaining time\\n            //\\n            // i.e. market has 10 days remaining. deposit interval is 1 day. capacity\\n            // is 10,000 TOKEN. max payout would be 1,000 TOKEN (10,000 * 1 / 10).\\n            markets[id_].maxPayout = capacity.mulDiv(uint256(meta.depositInterval), timeRemaining);\\n\\n\\n            // Calculate ideal target debt to satisty capacity in the remaining time\\n            // The target debt is based on whether the market is under or oversold at this point in time\\n            // This target debt will ensure price is reactive while ensuring the magnitude of being over/undersold\\n            // doesn't cause larger fluctuations towards the end of the market.\\n            //\\n            // Calculate target debt from the timeNeutralCapacity and the ratio of debt decay interval and the length of the market\\n            uint256 targetDebt = timeNeutralCapacity.mulDiv(\\n                uint256(meta.debtDecayInterval),\\n                uint256(meta.length)\\n            );\\n\\n\\n            // Derive a new control variable from the target debt\\n            uint256 controlVariable = terms[id_].controlVariable;\\n            uint256 newControlVariable = price_.mulDivUp(market.scale, targetDebt);\\n\\n\\n            emit Tuned(id_, controlVariable, newControlVariable);\\n\\n\\n            if (newControlVariable < controlVariable) {\\n                // If decrease, control variable change will be carried out over the tune interval\\n                // this is because price will be lowered\\n                uint256 change = controlVariable - newControlVariable;\\n                adjustments[id_] = Adjustment(change, time_, meta.tuneAdjustmentDelay, true);\\n            } else {\\n                // Tune up immediately\\n                terms[id_].controlVariable = newControlVariable;\\n                // Set current adjustment to inactive (e.g. if we are re-tuning early)\\n                adjustments[id_].active = false;\\n            }\\n\\n\\n            metadata[id_].lastTune = time_;\\n            metadata[id_].tuneBelowCapacity = market.capacity > meta.tuneIntervalCapacity\\n                ? market.capacity - meta.tuneIntervalCapacity\\n                : 0;\\n            metadata[id_].lastTuneDebt = targetDebt;\\n        }\\n```\\n\\nIf you don't update `meta.tuneBelowCapacity` when changing intervals, there is a risk that the price will not be tuned when tuneIntervalCapacity was decreased, or will still be tuned when tuneIntervalCapacity was increased.\\nAs a result, tuning will not be completed when needed. | Update meta.tuneBelowCapacity in the BondBaseSDA.setIntervals function. | Tuning logic will not be completed when needed. | ```\\n    function setIntervals(uint256 id_, uint32[3] calldata intervals_) external override {\\n        // Check that the market is live\\n        if (!isLive(id_)) revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that the intervals are non-zero\\n        if (intervals_[0] == 0 || intervals_[1] == 0 || intervals_[2] == 0)\\n            revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that tuneInterval >= tuneAdjustmentDelay\\n        if (intervals_[0] < intervals_[1]) revert Auctioneer_InvalidParams();\\n\\n\\n        BondMetadata storage meta = metadata[id_];\\n        // Check that tuneInterval >= depositInterval\\n        if (intervals_[0] < meta.depositInterval) revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that debtDecayInterval >= minDebtDecayInterval\\n        if (intervals_[2] < minDebtDecayInterval) revert Auctioneer_InvalidParams();\\n\\n\\n        // Check that sender is market owner\\n        BondMarket memory market = markets[id_];\\n        if (msg.sender != market.owner) revert Auctioneer_OnlyMarketOwner();\\n\\n\\n        // Update intervals\\n        meta.tuneInterval = intervals_[0];\\n        meta.tuneIntervalCapacity = market.capacity.mulDiv(\\n            uint256(intervals_[0]),\\n            uint256(terms[id_].conclusion) - block.timestamp\\n        ); // don't have a stored value for market duration, this will update tuneIntervalCapacity based on time remaining\\n        meta.tuneAdjustmentDelay = intervals_[1];\\n        meta.debtDecayInterval = intervals_[2];\\n    }\\n```\\n |
Existing Circuit Breaker Implementation Allow Faster Taker To Extract Payout Tokens From Market | medium | The current implementation of the circuit breaker is not optimal. Thus, the market maker will lose an excessive amount of payout tokens if a quoted token suddenly loses a large amount of value, even with a circuit breaker in place.\\nWhen the amount of the payout tokens purchased by the taker exceeds the `term.maxDebt`, the taker is still allowed to carry on with the transaction, and the market will only be closed after the current transaction is completed.\\n```\\nFile: BondBaseSDA.sol\\n // Circuit breaker. If max debt is breached, the market is closed\\n if (term.maxDebt < market.totalDebt) {\\n _close(id_);\\n } else {\\n // If market will continue, the control variable is tuned to to expend remaining capacity over remaining market duration\\n _tune(id_, currentTime, price);\\n }\\n```\\n\\nAssume that the state of the SDAM at T0 is as follows:\\n`term.maxDebt` is 110 (debt buffer = 10%)\\n`maxPayout` is 100\\n`market.totalDebt` is 99\\nAssume that the quoted token suddenly loses a large amount of value (e.g. stablecoin depeg causing the quote token to drop to almost zero). Bob decided to purchase as many payout tokens as possible before reaching the `maxPayout` limit to maximize the value he could extract from the market. Assume that Bob is able to purchase 50 bond tokens at T1 before reaching the `maxPayout` limit. As such, the state of the SDAM at T1 will be as follows:\\n`term.maxDebt` = 110\\n`maxPayout` = 100\\n`market.totalDebt` = 99 + 50 = 149\\nIn the above scenario, Bob's purchase has already breached the `term.maxDebt` limit. However, he could still purchase the 50 bond tokens in the current transaction. | Considering only allowing takers to purchase bond tokens up to the `term.maxDebt` limit.\\nFor instance, based on the earlier scenario, only allow Bob to purchase up to 11 bond tokens (term.maxDebt[110] - market.totalDebt[99]) instead of allowing him to purchase 50 bond tokens.\\nIf Bob attempts to purchase 50 bond tokens, the market can proceed to purchase the 11 bond tokens for Bob, and the remaining quote tokens can be refunded back to Bob. After that, since the `term.maxDebt (110) == market.totalDebt (110)`, the market can trigger the circuit breaker to close the market to protect the market from potential extreme market conditions.\\nThis ensures that bond tokens beyond the `term.maxDebt` limit would not be sold to the taker during extreme market conditions. | In the event that the price of the quote token falls to almost zero (e.g. 0.0001 dollars), then the fastest taker will be able to extract as many payout tokens as possible before reaching the `maxPayout` limit from the market. The extracted payout tokens are essentially free for the fastest taker. Taker gain is maker loss.\\nAdditionally, in the event that a quoted token suddenly loses a large amount of value, the amount of payout tokens lost by the market marker is capped at the `maxPayout` limit instead of capping the loss at the `term.maxDebt` limit. This resulted in the market makers losing more payout tokens than expected, and their payout tokens being sold to the takers at a very low price (e.g. 0.0001 dollars).\\nThe market makers will suffer more loss if the `maxPayout` limit of their markets is higher. | ```\\nFile: BondBaseSDA.sol\\n // Circuit breaker. 
If max debt is breached, the market is closed\\n if (term.maxDebt < market.totalDebt) {\\n _close(id_);\\n } else {\\n // If market will continue, the control variable is tuned to to expend remaining capacity over remaining market duration\\n _tune(id_, currentTime, price);\\n }\\n```\\n |
Market Price Lower Than Expected | medium | The market price does not conform to the specification documented within the whitepaper. As a result, the computed market price is lower than expected.\\nThe following definition of the market price is taken from the whitepaper. Taken from Page 13 of the whitepaper - Definition 25\\n\\nThe integer implementation of the market price must be rounded up per the whitepaper. This ensures that the integer implementation of the market price is greater than or equal to the real value of the market price so as to protect makers from selling tokens at a lower price than expected.\\nWithin the `BondBaseSDA.marketPrice` function, the computation of the market price is rounded up in Line 688, which conforms to the specification.\\n```\\nFile: BondBaseSDA.sol\\n function marketPrice(uint256 id_) public view override returns (uint256) {\\n uint256 price = currentControlVariable(id_).mulDivUp(currentDebt(id_), markets[id_].scale);\\n\\n return (price > markets[id_].minPrice) ? price : markets[id_].minPrice;\\n }\\n```\\n\\nHowever, within the `BondBaseSDA._currentMarketPrice` function, the market price is rounded down, resulting in the makers selling tokens at a lower price than expected.\\n```\\nFile: BondBaseSDA.sol\\n function _currentMarketPrice(uint256 id_) internal view returns (uint256) {\\n BondMarket memory market = markets[id_];\\n return terms[id_].controlVariable.mulDiv(market.totalDebt, market.scale);\\n }\\n```\\n | Ensure the market price is rounded up so that the desired property can be achieved and the makers will not be selling tokens at a lower price than expected.\\n```\\nfunction _currentMarketPrice(uint256 id_) internal view returns (uint256) {\\n BondMarket memory market = markets[id_];\\n// Remove the line below\\n return terms[id_].controlVariable.mulDiv(market.totalDebt, market.scale);\\n// Add the line below\\n return terms[id_].controlVariable.mulDivUp(market.totalDebt, market.scale);\\n}\\n```\\n | Loss for the makers as their tokens are sold at a lower price than expected.\\nAdditionally, the affected `BondBaseSDA._currentMarketPrice` function is used within the `BondBaseSDA._decayAndGetPrice` function to derive the market price. Since a lower market price will be returned, this will lead to a higher amount of payout tokens. Subsequently, the `lastDecayIncrement` will be higher than expected, which will lead to a lower `totalDebt`. Lower debt means a lower market price will be computed later. | ```\\nFile: BondBaseSDA.sol\\n function marketPrice(uint256 id_) public view override returns (uint256) {\\n uint256 price = currentControlVariable(id_).mulDivUp(currentDebt(id_), markets[id_].scale);\\n\\n return (price > markets[id_].minPrice) ? price : markets[id_].minPrice;\\n }\\n```\\n |
Teller Cannot Be Removed From Callback Contract | medium | If a vulnerable Teller is being exploited by an attacker, there is no way for the owner of the Callback Contract to remove the vulnerable Teller from their Callback Contract.\\nThe Callback Contract is missing the feature to remove a Teller. Once a Teller has been added to the whitelist (approvedMarkets mapping), it is not possible to remove the Teller from the whitelist.\\n```\\nFile: BondBaseCallback.sol\\n /* ========== WHITELISTING ========== */\\n\\n /// @inheritdoc IBondCallback\\n function whitelist(address teller_, uint256 id_) external override onlyOwner {\\n // Check that the market id is a valid, live market on the aggregator\\n try _aggregator.isLive(id_) returns (bool live) {\\n if (!live) revert Callback_MarketNotSupported(id_);\\n } catch {\\n revert Callback_MarketNotSupported(id_);\\n }\\n\\n // Check that the provided teller is the teller for the market ID on the stored aggregator\\n // We could pull the teller from the aggregator, but requiring the teller to be passed in\\n // is more explicit about which contract is being whitelisted\\n if (teller_ != address(_aggregator.getTeller(id_))) revert Callback_TellerMismatch();\\n\\n approvedMarkets[teller_][id_] = true;\\n }\\n```\\n | Consider implementing an additional function to allow the removal of a Teller from the whitelist (approvedMarkets mapping), so that a vulnerable Teller can be removed swiftly if needed.\\n```\\nfunction removeFromWhitelist(address teller_, uint256 id_) external override onlyOwner {\\n approvedMarkets[teller_][id_] = false;\\n}\\n```\\n\\nNote: Although the owner of the Callback Contract can DOS its own market by abusing the `removeFromWhitelist` function, no sensible owner would do so. | In the event that a whitelisted Teller is found to be vulnerable and has been actively exploited by an attacker in the wild, the owner of the Callback Contract needs to mitigate the issue swiftly by removing the vulnerable Teller from the Callback Contract to stop it from draining the asset within the Callback Contract. However, the mitigation effort will be hindered by the fact there is no way to remove a Teller within the Callback Contract once it has been whitelisted. Thus, it might not be possible to stop the attacker from exploiting the vulnerable Teller to drain assets within the Callback Contract. The Callback Contract owners would need to find a workaround to block the attack, which will introduce an unnecessary delay to the recovery process where every second counts.\\nAdditionally, if the owner accidentally whitelisted the wrong Teller, there is no way to remove it. | ```\\nFile: BondBaseCallback.sol\\n /* ========== WHITELISTING ========== */\\n\\n /// @inheritdoc IBondCallback\\n function whitelist(address teller_, uint256 id_) external override onlyOwner {\\n // Check that the market id is a valid, live market on the aggregator\\n try _aggregator.isLive(id_) returns (bool live) {\\n if (!live) revert Callback_MarketNotSupported(id_);\\n } catch {\\n revert Callback_MarketNotSupported(id_);\\n }\\n\\n // Check that the provided teller is the teller for the market ID on the stored aggregator\\n // We could pull the teller from the aggregator, but requiring the teller to be passed in\\n // is more explicit about which contract is being whitelisted\\n if (teller_ != address(_aggregator.getTeller(id_))) revert Callback_TellerMismatch();\\n\\n approvedMarkets[teller_][id_] = true;\\n }\\n```\\n |
Create Fee Discount Feature Is Broken | medium | The create fee discount feature is found to be broken within the protocol.\\nThe create fee discount feature relies on the `createFeeDiscount` state variable to determine the fee to be discounted from the protocol fee. However, it was observed that there is no way to initialize the `createFeeDiscount` state variable. As a result, the `createFeeDiscount` state variable will always be zero.\\n```\\nFile: BondFixedExpiryTeller.sol\\n // If fee is greater than the create discount, then calculate the fee and store it\\n // Otherwise, fee is zero.\\n if (protocolFee > createFeeDiscount) {\\n // Calculate fee amount\\n uint256 feeAmount = amount_.mulDiv(protocolFee - createFeeDiscount, FEE_DECIMALS);\\n rewards[_protocol][underlying_] += feeAmount;\\n\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_ - feeAmount);\\n\\n return (bondToken, amount_ - feeAmount);\\n } else {\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_);\\n\\n return (bondToken, amount_);\\n }\\n```\\n\\n```\\nFile: BondFixedTermTeller.sol\\n // If fee is greater than the create discount, then calculate the fee and store it\\n // Otherwise, fee is zero.\\n if (protocolFee > createFeeDiscount) {\\n // Calculate fee amount\\n uint256 feeAmount = amount_.mulDiv(protocolFee - createFeeDiscount, FEE_DECIMALS);\\n rewards[_protocol][underlying_] += feeAmount;\\n\\n // Mint new bond tokens\\n _mintToken(msg.sender, tokenId, amount_ - feeAmount);\\n\\n return (tokenId, amount_ - feeAmount);\\n } else {\\n // Mint new bond tokens\\n _mintToken(msg.sender, tokenId, amount_);\\n\\n return (tokenId, amount_);\\n }\\n```\\n | Implement a setter method for the `createFeeDiscount` state variable and the necessary verification checks.\\n```\\nfunction setCreateFeeDiscount(uint48 createFeeDiscount_) external requiresAuth {\\n if (createFeeDiscount_ > protocolFee) revert Teller_InvalidParams();\\n if (createFeeDiscount_ > 5e3) revert Teller_InvalidParams();\\n createFeeDiscount = createFeeDiscount_;\\n}\\n```\\n | The create fee discount feature is broken within the protocol. There is no way for the protocol team to configure a discount for the users of the `BondFixedExpiryTeller.create` and `BondFixedTermTeller.create` functions. As such, the users will not obtain any discount from the protocol when using the create function. | ```\\nFile: BondFixedExpiryTeller.sol\\n // If fee is greater than the create discount, then calculate the fee and store it\\n // Otherwise, fee is zero.\\n if (protocolFee > createFeeDiscount) {\\n // Calculate fee amount\\n uint256 feeAmount = amount_.mulDiv(protocolFee - createFeeDiscount, FEE_DECIMALS);\\n rewards[_protocol][underlying_] += feeAmount;\\n\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_ - feeAmount);\\n\\n return (bondToken, amount_ - feeAmount);\\n } else {\\n // Mint new bond tokens\\n bondToken.mint(msg.sender, amount_);\\n\\n return (bondToken, amount_);\\n }\\n```\\n |
`BondAggregator.findMarketFor` Function Will Break In Certain Conditions | medium | `BondAggregator.findMarketFor` function will break when the `BondBaseSDA.payoutFor` function within the for-loop reverts under certain conditions.\\nThe `BondBaseSDA.payoutFor` function will revert if the computed payout is larger than the market's max payout. Refer to Line 711 below.\\n```\\nFile: BondBaseSDA.sol\\n function payoutFor(\\n uint256 amount_,\\n uint256 id_,\\n address referrer_\\n ) public view override returns (uint256) {\\n // Calculate the payout for the given amount of tokens\\n uint256 fee = amount_.mulDiv(_teller.getFee(referrer_), 1e5);\\n uint256 payout = (amount_ - fee).mulDiv(markets[id_].scale, marketPrice(id_));\\n\\n // Check that the payout is less than or equal to the maximum payout,\\n // Revert if not, otherwise return the payout\\n if (payout > markets[id_].maxPayout) {\\n revert Auctioneer_MaxPayoutExceeded();\\n } else {\\n return payout;\\n }\\n }\\n```\\n\\nThe `BondAggregator.findMarketFor` function will call the `BondBaseSDA.payoutFor` function at Line 245. The `BondBaseSDA.payoutFor` function will revert if the final computed payout is larger than the `markets[id_].maxPayout` as mentioned earlier. This will cause the entire for-loop to "break" and the transaction to revert.\\nAssume that the user configures the `minAmountOut_` to be `0`, then the condition `minAmountOut_ <= maxPayout` Line 244 will always be true. The `amountIn_` will always be passed to the `payoutFor` function. In some markets where the computed payout is larger than the market's max payout, the `BondAggregator.findMarketFor` function will revert.\\n```\\nFile: BondAggregator.sol\\n /// @inheritdoc IBondAggregator\\n function findMarketFor(\\n address payout_,\\n address quote_,\\n uint256 amountIn_,\\n uint256 minAmountOut_,\\n uint256 maxExpiry_\\n ) external view returns (uint256) {\\n uint256[] memory ids = marketsFor(payout_, quote_);\\n uint256 len = ids.length;\\n uint256[] memory payouts = new uint256[](len);\\n\\n uint256 highestOut;\\n uint256 id = type(uint256).max; // set to max so an empty set doesn't return 0, the first index\\n uint48 vesting;\\n uint256 maxPayout;\\n IBondAuctioneer auctioneer;\\n for (uint256 i; i < len; ++i) {\\n auctioneer = marketsToAuctioneers[ids[i]];\\n (, , , , vesting, maxPayout) = auctioneer.getMarketInfoForPurchase(ids[i]);\\n\\n uint256 expiry = (vesting <= MAX_FIXED_TERM) ? block.timestamp + vesting : vesting;\\n\\n if (expiry <= maxExpiry_) {\\n payouts[i] = minAmountOut_ <= maxPayout\\n ? payoutFor(amountIn_, ids[i], address(0))\\n : 0;\\n\\n if (payouts[i] > highestOut) {\\n highestOut = payouts[i];\\n id = ids[i];\\n }\\n }\\n }\\n\\n return id;\\n }\\n```\\n | Consider using try-catch or address.call to handle the revert of the `BondBaseSDA.payoutFor` function within the for-loop gracefully. This ensures that a single revert of the `BondBaseSDA.payoutFor` function will not affect the entire for-loop within the `BondAggregator.findMarketFor` function. | The find market feature within the protocol is broken under certain conditions. As such, users would not be able to obtain the list of markets that meet their requirements. The market makers affected by this issue will lose the opportunity to sell their bond tokens. | ```\\nFile: BondBaseSDA.sol\\n function payoutFor(\\n uint256 amount_,\\n uint256 id_,\\n address referrer_\\n ) public view override returns (uint256) {\\n // Calculate the payout for the given amount of tokens\\n uint256 fee = amount_.mulDiv(_teller.getFee(referrer_), 1e5);\\n uint256 payout = (amount_ - fee).mulDiv(markets[id_].scale, marketPrice(id_));\\n\\n // Check that the payout is less than or equal to the maximum payout,\\n // Revert if not, otherwise return the payout\\n if (payout > markets[id_].maxPayout) {\\n revert Auctioneer_MaxPayoutExceeded();\\n } else {\\n return payout;\\n }\\n }\\n```\\n
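A rough sketch of the try/catch approach suggested in the recommendation above. It assumes `payoutFor` can be reached via an external call on the auctioneer; the exact interface used here is an assumption:

```
// Inside the findMarketFor loop, replace the direct payoutFor call with a
// guarded external call so a single MaxPayoutExceeded revert does not break
// the whole loop.
if (expiry <= maxExpiry_ && minAmountOut_ <= maxPayout) {
    try auctioneer.payoutFor(amountIn_, ids[i], address(0)) returns (uint256 payout) {
        payouts[i] = payout;
    } catch {
        payouts[i] = 0; // skip markets whose payout would exceed maxPayout
    }

    if (payouts[i] > highestOut) {
        highestOut = payouts[i];
        id = ids[i];
    }
}
```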
Auctioneer Cannot Be Removed From The Protocol | medium | If a vulnerable Auctioneer is being exploited by an attacker, there is no way to remove the vulnerable Auctioneer from the protocol.\\nThe protocol is missing the feature to remove an auctioneer. Once an auctioneer has been added to the whitelist, it is not possible to remove the auctioneer from the whitelist.\\n```\\nFile: BondAggregator.sol\\n function registerAuctioneer(IBondAuctioneer auctioneer_) external requiresAuth {\\n // Restricted to authorized addresses\\n\\n // Check that the auctioneer is not already registered\\n if (_whitelist[address(auctioneer_)])\\n revert Aggregator_AlreadyRegistered(address(auctioneer_));\\n\\n // Add the auctioneer to the whitelist\\n auctioneers.push(auctioneer_);\\n _whitelist[address(auctioneer_)] = true;\\n }\\n```\\n | Consider implementing an additional function to allow the removal of an Auctioneer from the whitelist, so that vulnerable Auctioneer can be removed swiftly if needed.\\n```\\nfunction deregisterAuctioneer(IBondAuctioneer auctioneer_) external requiresAuth {\\n // Remove the auctioneer from the whitelist\\n _whitelist[address(auctioneer_)] = false;\\n}\\n```\\n | In the event that a whitelisted Auctioneer is found to be vulnerable and has been actively exploited by an attacker in the wild, the protocol needs to mitigate the issue swiftly by removing the vulnerable Auctioneer from the protocol. However, the mitigation effort will be hindered by the fact there is no way to remove an Auctioneer within the protocol once it has been whitelisted. Thus, it might not be possible to stop the attacker from exploiting the vulnerable Auctioneer. The protocol team would need to find a workaround to block the attack, which will introduce an unnecessary delay to the recovery process where every second counts.\\nAdditionally, if the admin accidentally whitelisted the wrong Auctioneer, there is no way to remove it. | ```\\nFile: BondAggregator.sol\\n function registerAuctioneer(IBondAuctioneer auctioneer_) external requiresAuth {\\n // Restricted to authorized addresses\\n\\n // Check that the auctioneer is not already registered\\n if (_whitelist[address(auctioneer_)])\\n revert Aggregator_AlreadyRegistered(address(auctioneer_));\\n\\n // Add the auctioneer to the whitelist\\n auctioneers.push(auctioneer_);\\n _whitelist[address(auctioneer_)] = true;\\n }\\n```\\n |
Debt Decay Faster Than Expected | medium | The debt decays at a rate faster than expected, causing market makers to sell bond tokens at a lower price than expected.\\nThe following definition of the debt decay reference time following any purchases at time `t` is taken from the whitepaper. The second variable, which is the delay increment, is rounded up. Following is taken from Page 15 of the whitepaper - Definition 27\\n\\nHowever, the actual implementation in the codebase differs from the specification. At Line 514, the delay increment is rounded down instead.\\n```\\nFile: BondBaseSDA.sol\\n // Set last decay timestamp based on size of purchase to linearize decay\\n uint256 lastDecayIncrement = debtDecayInterval.mulDiv(payout_, lastTuneDebt);\\n metadata[id_].lastDecay += uint48(lastDecayIncrement);\\n```\\n | When computing the `lastDecayIncrement`, the result should be rounded up.\\n```\\n// Set last decay timestamp based on size of purchase to linearize decay\\n// Remove the line below\\n uint256 lastDecayIncrement = debtDecayInterval.mulDiv(payout_, lastTuneDebt);\\n// Add the line below\\n uint256 lastDecayIncrement = debtDecayInterval.mulDivUp(payout_, lastTuneDebt);\\nmetadata[id_].lastDecay += uint48(lastDecayIncrement);\\n```\\n | When the delay increment (TD) is rounded down, the debt decay reference time increment will be smaller than expected. The debt component will then decay at a faster rate. As a result, the market price will not be adjusted in an optimized manner, and the market price will fall faster than expected, causing market makers to sell bond tokens at a lower price than expected.\\nFollowing is taken from Page 8 of the whitepaper - Definition 8\\n | ```\\nFile: BondBaseSDA.sol\\n // Set last decay timestamp based on size of purchase to linearize decay\\n uint256 lastDecayIncrement = debtDecayInterval.mulDiv(payout_, lastTuneDebt);\\n metadata[id_].lastDecay += uint48(lastDecayIncrement);\\n```\\n
BondBaseSDA.setDefaults doesn't validate inputs | medium | BondBaseSDA.setDefaults doesn't validate inputs, which can lead to initializing new markets incorrectly.\\n```\\n function setDefaults(uint32[6] memory defaults_) external override requiresAuth {\\n // Restricted to authorized addresses\\n defaultTuneInterval = defaults_[0];\\n defaultTuneAdjustment = defaults_[1];\\n minDebtDecayInterval = defaults_[2];\\n minDepositInterval = defaults_[3];\\n minMarketDuration = defaults_[4];\\n minDebtBuffer = defaults_[5];\\n }\\n```\\n\\nAs can be seen, BondBaseSDA.setDefaults doesn't perform any checks. Because of that it's possible to provide values that will break market functionality.\\nFor example, `minDepositInterval` can be set bigger than `minMarketDuration`, making it impossible to create a new market.\\nOr `minDebtBuffer` can be set to 100% or 0%, which will break the market closing logic. | Add input validation. | New markets can't be created, or market logic will not work as designed. | ```\\n function setDefaults(uint32[6] memory defaults_) external override requiresAuth {\\n // Restricted to authorized addresses\\n defaultTuneInterval = defaults_[0];\\n defaultTuneAdjustment = defaults_[1];\\n minDebtDecayInterval = defaults_[2];\\n minDepositInterval = defaults_[3];\\n minMarketDuration = defaults_[4];\\n minDebtBuffer = defaults_[5];\\n }\\n```\\n
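A hedged sketch of what the missing validation could look like. The specific bounds and the unit assumed for `minDebtBuffer` are assumptions and would need to match the rest of the contract:

```
function setDefaults(uint32[6] memory defaults_) external override requiresAuth {
    // Tune interval should not be shorter than the tune adjustment delay
    if (defaults_[0] < defaults_[1]) revert Auctioneer_InvalidParams();
    // Deposit interval must fit inside the minimum market duration
    if (defaults_[3] > defaults_[4]) revert Auctioneer_InvalidParams();
    // Debt buffer must be strictly between 0% and 100% (scale assumed)
    if (defaults_[5] == 0 || defaults_[5] >= 1e5) revert Auctioneer_InvalidParams();

    defaultTuneInterval = defaults_[0];
    defaultTuneAdjustment = defaults_[1];
    minDebtDecayInterval = defaults_[2];
    minDepositInterval = defaults_[3];
    minMarketDuration = defaults_[4];
    minDebtBuffer = defaults_[5];
}
```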
BondAggregator.liveMarketsBy eventually will revert because of block gas limit | medium | BondAggregator.liveMarketsBy eventually will revert because of the block gas limit.\\n```\\n function liveMarketsBy(address owner_) external view returns (uint256[] memory) {\\n uint256 count;\\n IBondAuctioneer auctioneer;\\n for (uint256 i; i < marketCounter; ++i) {\\n auctioneer = marketsToAuctioneers[i];\\n if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n ++count;\\n }\\n }\\n\\n\\n uint256[] memory ids = new uint256[](count);\\n count = 0;\\n for (uint256 i; i < marketCounter; ++i) {\\n auctioneer = marketsToAuctioneers[i];\\n if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n ids[count] = i;\\n ++count;\\n }\\n }\\n\\n\\n return ids;\\n }\\n```\\n\\nThe BondAggregator.liveMarketsBy function loops through all markets and makes at least `marketCounter` external calls (when no markets are live) and at most 4 * `marketCounter` external calls (when all markets are live and the owner matches). This consumes a lot of gas, even though it is called from a view function, and each new market increases the loop size.\\nThat means that after some time the `marketsToAuctioneers` mapping will be big enough that the gas provided to view/pure calls will not be enough to retrieve all data (50 million gas according to this). So the function will revert.\\nA similar problem exists with the `findMarketFor`, `marketsFor` and `liveMarketsFor` functions. | Remove inactive markets, or add start and end indices to the functions. | The functions will always revert, and whoever depends on them will not be able to get the information. | ```\\n function liveMarketsBy(address owner_) external view returns (uint256[] memory) {\\n uint256 count;\\n IBondAuctioneer auctioneer;\\n for (uint256 i; i < marketCounter; ++i) {\\n auctioneer = marketsToAuctioneers[i];\\n if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n ++count;\\n }\\n }\\n\\n\\n uint256[] memory ids = new uint256[](count);\\n count = 0;\\n for (uint256 i; i < marketCounter; ++i) {\\n auctioneer = marketsToAuctioneers[i];\\n if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {\\n ids[count] = i;\\n ++count;\\n }\\n }\\n\\n\\n return ids;\\n }\\n```\\n
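A sketch of the start/end index variant mentioned in the recommendation. The extra parameters are an assumption, not part of the existing `IBondAggregator` interface:

```
function liveMarketsBy(
    address owner_,
    uint256 start_,
    uint256 end_
) external view returns (uint256[] memory) {
    if (end_ > marketCounter) end_ = marketCounter;

    uint256 count;
    IBondAuctioneer auctioneer;
    for (uint256 i = start_; i < end_; ++i) {
        auctioneer = marketsToAuctioneers[i];
        if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {
            ++count;
        }
    }

    uint256[] memory ids = new uint256[](count);
    count = 0;
    for (uint256 i = start_; i < end_; ++i) {
        auctioneer = marketsToAuctioneers[i];
        if (auctioneer.isLive(i) && auctioneer.ownerOf(i) == owner_) {
            ids[count] = i;
            ++count;
        }
    }

    return ids;
}
```

Callers can then page through markets in chunks that stay comfortably under the node's gas allowance for view calls.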
meta.tuneBelowCapacity param is not updated when BondBaseSDA.setIntervals is called | medium | When the BondBaseSDA.setIntervals function is called, the meta.tuneBelowCapacity param is not updated, which has an impact on price tuning.\\n```\\n function setIntervals(uint256 id_, uint32[3] calldata intervals_) external override {\\n // Check that the market is live\\n if (!isLive(id_)) revert Auctioneer_InvalidParams();\\n\\n\\n // Check that the intervals are non-zero\\n if (intervals_[0] == 0 || intervals_[1] == 0 || intervals_[2] == 0)\\n revert Auctioneer_InvalidParams();\\n\\n\\n // Check that tuneInterval >= tuneAdjustmentDelay\\n if (intervals_[0] < intervals_[1]) revert Auctioneer_InvalidParams();\\n\\n\\n BondMetadata storage meta = metadata[id_];\\n // Check that tuneInterval >= depositInterval\\n if (intervals_[0] < meta.depositInterval) revert Auctioneer_InvalidParams();\\n\\n\\n // Check that debtDecayInterval >= minDebtDecayInterval\\n if (intervals_[2] < minDebtDecayInterval) revert Auctioneer_InvalidParams();\\n\\n\\n // Check that sender is market owner\\n BondMarket memory market = markets[id_];\\n if (msg.sender != market.owner) revert Auctioneer_OnlyMarketOwner();\\n\\n\\n // Update intervals\\n meta.tuneInterval = intervals_[0];\\n meta.tuneIntervalCapacity = market.capacity.mulDiv(\\n uint256(intervals_[0]),\\n uint256(terms[id_].conclusion) - block.timestamp\\n ); // don't have a stored value for market duration, this will update tuneIntervalCapacity based on time remaining\\n meta.tuneAdjustmentDelay = intervals_[1];\\n meta.debtDecayInterval = intervals_[2];\\n }\\n```\\n\\n`meta.tuneInterval` has an impact on `meta.tuneIntervalCapacity`. That means that when you change the tuning interval you also change the capacity that is operated on during tuning. There is also one more param that depends on this, but it is not accounted for here.\\n```\\n if (\\n (market.capacity < meta.tuneBelowCapacity && timeNeutralCapacity < initialCapacity) ||\\n (time_ >= meta.lastTune + meta.tuneInterval && timeNeutralCapacity > initialCapacity)\\n ) {\\n // Calculate the correct payout to complete on time assuming each bond\\n // will be max size in the desired deposit interval for the remaining time\\n //\\n // i.e. market has 10 days remaining. deposit interval is 1 day. capacity\\n // is 10,000 TOKEN. max payout would be 1,000 TOKEN (10,000 * 1 / 10).\\n markets[id_].maxPayout = capacity.mulDiv(uint256(meta.depositInterval), timeRemaining);\\n\\n\\n // Calculate ideal target debt to satisty capacity in the remaining time\\n // The target debt is based on whether the market is under or oversold at this point in time\\n // This target debt will ensure price is reactive while ensuring the magnitude of being over/undersold\\n // doesn't cause larger fluctuations towards the end of the market.\\n //\\n // Calculate target debt from the timeNeutralCapacity and the ratio of debt decay interval and the length of the market\\n uint256 targetDebt = timeNeutralCapacity.mulDiv(\\n uint256(meta.debtDecayInterval),\\n uint256(meta.length)\\n );\\n\\n\\n // Derive a new control variable from the target debt\\n uint256 controlVariable = terms[id_].controlVariable;\\n uint256 newControlVariable = price_.mulDivUp(market.scale, targetDebt);\\n\\n\\n emit Tuned(id_, controlVariable, newControlVariable);\\n\\n\\n if (newControlVariable < controlVariable) {\\n // If decrease, control variable change will be carried out over the tune interval\\n // this is because price will be lowered\\n uint256 change = controlVariable - newControlVariable;\\n adjustments[id_] = Adjustment(change, time_, meta.tuneAdjustmentDelay, true);\\n } else {\\n // Tune up immediately\\n terms[id_].controlVariable = newControlVariable;\\n // Set current adjustment to inactive (e.g. if we are re-tuning early)\\n adjustments[id_].active = false;\\n }\\n\\n\\n metadata[id_].lastTune = time_;\\n metadata[id_].tuneBelowCapacity = market.capacity > meta.tuneIntervalCapacity\\n ? market.capacity - meta.tuneIntervalCapacity\\n : 0;\\n metadata[id_].lastTuneDebt = targetDebt;\\n }\\n```\\n\\nIf you don't update `meta.tuneBelowCapacity` when changing intervals, there is a risk that the price will not be tuned when tuneIntervalCapacity was decreased, or that it will still be tuned when tuneIntervalCapacity was increased.\\nAs a result, tuning will not be completed when needed. | Update meta.tuneBelowCapacity in BondBaseSDA.setIntervals function. | Tuning logic will not be completed when needed. | ```\\n function setIntervals(uint256 id_, uint32[3] calldata intervals_) external override {\\n // Check that the market is live\\n if (!isLive(id_)) revert Auctioneer_InvalidParams();\\n\\n\\n // Check that the intervals are non-zero\\n if (intervals_[0] == 0 || intervals_[1] == 0 || intervals_[2] == 0)\\n revert Auctioneer_InvalidParams();\\n\\n\\n // Check that tuneInterval >= tuneAdjustmentDelay\\n if (intervals_[0] < intervals_[1]) revert Auctioneer_InvalidParams();\\n\\n\\n BondMetadata storage meta = metadata[id_];\\n // Check that tuneInterval >= depositInterval\\n if (intervals_[0] < meta.depositInterval) revert Auctioneer_InvalidParams();\\n\\n\\n // Check that debtDecayInterval >= minDebtDecayInterval\\n if (intervals_[2] < minDebtDecayInterval) revert Auctioneer_InvalidParams();\\n\\n\\n // Check that sender is market owner\\n BondMarket memory market = markets[id_];\\n if (msg.sender != market.owner) revert Auctioneer_OnlyMarketOwner();\\n\\n\\n // Update intervals\\n meta.tuneInterval = intervals_[0];\\n meta.tuneIntervalCapacity = market.capacity.mulDiv(\\n uint256(intervals_[0]),\\n uint256(terms[id_].conclusion) - block.timestamp\\n ); // don't have a stored value for market duration, this will update tuneIntervalCapacity based on time remaining\\n meta.tuneAdjustmentDelay = intervals_[1];\\n meta.debtDecayInterval = intervals_[2];\\n }\\n```\\n
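A sketch of the fix, mirroring how the tuning logic itself derives `tuneBelowCapacity`; placing it at the end of `setIntervals` is an assumption:

```
// Update intervals (existing logic)
meta.tuneInterval = intervals_[0];
meta.tuneIntervalCapacity = market.capacity.mulDiv(
    uint256(intervals_[0]),
    uint256(terms[id_].conclusion) - block.timestamp
);
meta.tuneAdjustmentDelay = intervals_[1];
meta.debtDecayInterval = intervals_[2];

// Added: keep tuneBelowCapacity consistent with the recomputed tuneIntervalCapacity
meta.tuneBelowCapacity = market.capacity > meta.tuneIntervalCapacity
    ? market.capacity - meta.tuneIntervalCapacity
    : 0;
```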
DnGmxJuniorVaultManager#_rebalanceBorrow logic is flawed and could result in vault liquidation | high | DnGmxJuniorVaultManager#_rebalanceBorrow fails to rebalance correctly if only one of the two assets needs a rebalance. In the case where one asset increases rapidly in price while the other stays constant, the vault may be liquidated.\\n```\\n // If both eth and btc swap amounts are not beyond the threshold then no flashloan needs to be executed | case 1\\n if (btcAssetAmount == 0 && ethAssetAmount == 0) return;\\n\\n if (repayDebtBtc && repayDebtEth) {\\n // case where both the token assets are USDC\\n // only one entry required which is combined asset amount for both tokens\\n assets = new address[](1);\\n amounts = new uint256[](1);\\n\\n assets[0] = address(state.usdc);\\n amounts[0] = (btcAssetAmount + ethAssetAmount);\\n } else if (btcAssetAmount == 0 || ethAssetAmount == 0) {\\n // Exactly one would be true since case-1 excluded (both false) | case-2\\n // One token amount = 0 and other token amount > 0\\n // only one entry required for the non-zero amount token\\n assets = new address[](1);\\n amounts = new uint256[](1);\\n\\n if (btcAssetAmount == 0) {\\n assets[0] = (repayDebtBtc ? address(state.usdc) : address(state.wbtc));\\n amounts[0] = btcAssetAmount;\\n } else {\\n assets[0] = (repayDebtEth ? address(state.usdc) : address(state.weth));\\n amounts[0] = ethAssetAmount;\\n }\\n```\\n\\nThe logic above is used to determine what assets to borrow using the flashloan. If the rebalance amount is under a threshold then the assetAmount is set equal to zero. The first check `if (btcAssetAmount == 0 && ethAssetAmount == 0) return;` is a short circuit that returns if neither asset is above the threshold. The third check `else if (btcAssetAmount == 0 || ethAssetAmount == 0)` is the point of interest. Since we short circuit if both are zero then to meet this condition exactly one asset needs to be rebalanced. The logic that follows is where the error is. The comments indicate that it needs to enter with the non-zero amount token but the actual logic reflects the opposite. If `btcAssetAmount == 0` it actually tries to enter with wBTC which would be the zero amount asset.\\nThe result of this can be catastrophic for the vault. If one token increases in value rapidly while the other is constant the vault will only ever try to rebalance the one token but because of this logical error it will never actually complete the rebalance. If the token increases in value enough, the vault could actually end up being liquidated. | Small change to reverse the logic and make it correct:\\n```\\n- if (btcAssetAmount == 0) {\\n+ if (btcAssetAmount != 0) {\\n assets[0] = (repayDebtBtc ? address(state.usdc) : address(state.wbtc));\\n amounts[0] = btcAssetAmount;\\n } else {\\n assets[0] = (repayDebtEth ? address(state.usdc) : address(state.weth));\\n amounts[0] = ethAssetAmount;\\n }\\n```\\n | Vault is unable to rebalance correctly if only one asset needs to be rebalanced, which can lead to the vault being liquidated | ```\\n // If both eth and btc swap amounts are not beyond the threshold then no flashloan needs to be executed | case 1\\n if (btcAssetAmount == 0 && ethAssetAmount == 0) return;\\n\\n if (repayDebtBtc && repayDebtEth) {\\n // case where both the token assets are USDC\\n // only one entry required which is combined asset amount for both tokens\\n assets = new address[](1);\\n amounts = new uint256[](1);\\n\\n assets[0] = address(state.usdc);\\n amounts[0] = (btcAssetAmount + ethAssetAmount);\\n } else if (btcAssetAmount == 0 || ethAssetAmount == 0) {\\n // Exactly one would be true since case-1 excluded (both false) | case-2\\n // One token amount = 0 and other token amount > 0\\n // only one entry required for the non-zero amount token\\n assets = new address[](1);\\n amounts = new uint256[](1);\\n\\n if (btcAssetAmount == 0) {\\n assets[0] = (repayDebtBtc ? address(state.usdc) : address(state.wbtc));\\n amounts[0] = btcAssetAmount;\\n } else {\\n assets[0] = (repayDebtEth ? address(state.usdc) : address(state.weth));\\n amounts[0] = ethAssetAmount;\\n }\\n```\\n
DnGmxJuniorVaultManager#_totalAssets current implementation doesn't properly maximize or minimize | medium | The maximize input to DnGmxJuniorVaultManager#_totalAssets indicates whether to either maximize or minimize the NAV. Internal logic of the function doesn't accurately reflect that because under some circumstances, maximize = true actually returns a lower value than maximize = false.\\n```\\n uint256 unhedgedGlp = (state.unhedgedGlpInUsdc + dnUsdcDepositedPos).mulDivDown(\\n PRICE_PRECISION,\\n _getGlpPrice(state, !maximize)\\n );\\n\\n // calculate current borrow amounts\\n (uint256 currentBtc, uint256 currentEth) = _getCurrentBorrows(state);\\n uint256 totalCurrentBorrowValue = _getBorrowValue(state, currentBtc, currentEth);\\n\\n // add negative part to current borrow value which will be subtracted at the end\\n // convert usdc amount into glp amount\\n uint256 borrowValueGlp = (totalCurrentBorrowValue + dnUsdcDepositedNeg).mulDivDown(\\n PRICE_PRECISION,\\n _getGlpPrice(state, !maximize)\\n );\\n\\n // if we need to minimize then add additional slippage\\n if (!maximize) unhedgedGlp = unhedgedGlp.mulDivDown(MAX_BPS - state.slippageThresholdGmxBps, MAX_BPS);\\n if (!maximize) borrowValueGlp = borrowValueGlp.mulDivDown(MAX_BPS - state.slippageThresholdGmxBps, MAX_BPS);\\n```\\n\\nTo maximize the estimate of the vault's NAV, the underlying debt should be minimized and the value of held assets should be maximized. Under the current settings there is a mix of both of those and the function doesn't consistently minimize or maximize. Consider when NAV is "maximized". Under this scenario the GlpPrice used in the estimate is minimized. This minimizes the value of both the borrowedGlp (debt) and of the unhedgedGlp (assets). The result is that the NAV is not maximized because the value of the assets is also minimized. In this scenario the GlpPrice should be maximized when calculating the assets and minimized when calculating the debt. The reverse should be true when minimizing the NAV. Slippage requirements are also applied incorrectly when adjusting borrowValueGlp. The current implementation implies that if the debt were to be paid back, the vault would repay its debt for less than expected. When paying back debt the slippage should imply paying more than expected rather than less, therefore the slippage should be added rather than subtracted. | To properly maximize, it should assume the best possible rate for exchanging its assets. Likewise, to minimize, it should assume its debt is as large as possible and thus it encounters maximum possible slippage when repaying its debt. I recommend the following changes:\\n```\\n uint256 unhedgedGlp = (state.unhedgedGlpInUsdc + dnUsdcDepositedPos).mulDivDown(\\n PRICE_PRECISION,\\n- _getGlpPrice(state, !maximize)\\n+ _getGlpPrice(state, maximize)\\n );\\n\\n // calculate current borrow amounts\\n (uint256 currentBtc, uint256 currentEth) = _getCurrentBorrows(state);\\n uint256 totalCurrentBorrowValue = _getBorrowValue(state, currentBtc, currentEth);\\n\\n // add negative part to current borrow value which will be subtracted at the end\\n // convert usdc amount into glp amount\\n uint256 borrowValueGlp = (totalCurrentBorrowValue + dnUsdcDepositedNeg).mulDivDown(\\n PRICE_PRECISION,\\n _getGlpPrice(state, !maximize)\\n );\\n\\n // if we need to minimize then add additional slippage\\n if (!maximize) unhedgedGlp = unhedgedGlp.mulDivDown(MAX_BPS - state.slippageThresholdGmxBps, MAX_BPS);\\n- if (!maximize) borrowValueGlp = borrowValueGlp.mulDivDown(MAX_BPS - state.slippageThresholdGmxBps, MAX_BPS);\\n+ if (!maximize) borrowValueGlp = borrowValueGlp.mulDivDown(MAX_BPS + state.slippageThresholdGmxBps, MAX_BPS);\\n```\\n | DnGmxJuniorVaultManager#_totalAssets doesn't accurately reflect NAV. Since this is used when determining critical parameters it may lead to inaccuracies. | ```\\n uint256 unhedgedGlp = (state.unhedgedGlpInUsdc + dnUsdcDepositedPos).mulDivDown(\\n PRICE_PRECISION,\\n _getGlpPrice(state, !maximize)\\n );\\n\\n // calculate current borrow amounts\\n (uint256 currentBtc, uint256 currentEth) = _getCurrentBorrows(state);\\n uint256 totalCurrentBorrowValue = _getBorrowValue(state, currentBtc, currentEth);\\n\\n // add negative part to current borrow value which will be subtracted at the end\\n // convert usdc amount into glp amount\\n uint256 borrowValueGlp = (totalCurrentBorrowValue + dnUsdcDepositedNeg).mulDivDown(\\n PRICE_PRECISION,\\n _getGlpPrice(state, !maximize)\\n );\\n\\n // if we need to minimize then add additional slippage\\n if (!maximize) unhedgedGlp = unhedgedGlp.mulDivDown(MAX_BPS - state.slippageThresholdGmxBps, MAX_BPS);\\n if (!maximize) borrowValueGlp = borrowValueGlp.mulDivDown(MAX_BPS - state.slippageThresholdGmxBps, MAX_BPS);\\n```\\n
`Staking.unstake()` doesn't decrease the original voting power that was used in `Staking.stake()`. | high | `Staking.unstake()` doesn't decrease the original voting power that was used in `Staking.stake()`.\\nWhen users stake/unstake the underlying NFTs, it calculates the token voting power using getTokenVotingPower() and increases/decreases their voting power accordingly.\\n```\\n function getTokenVotingPower(uint _tokenId) public override view returns (uint) {\\n if (ownerOf(_tokenId) == address(0)) revert NonExistentToken();\\n\\n // If tokenId < 10000, it's a FrankenPunk, so 100/100 = a multiplier of 1\\n uint multiplier = _tokenId < 10_000 ? PERCENT : monsterMultiplier;\\n \\n // evilBonus will return 0 for all FrankenMonsters, as they are not eligible for the evil bonus\\n return ((baseVotes * multiplier) / PERCENT) + stakedTimeBonus[_tokenId] + evilBonus(_tokenId);\\n }\\n```\\n\\nBut `getTokenVotingPower()` uses some parameters like `monsterMultiplier` and `baseVotes` and the output would be changed for the same `tokenId` after the admin changed these settings.\\nCurrently, `_stake()` and `_unstake()` calculates the token voting power independently and the below scenario would be possible.\\nAt the first time, `baseVotes = 20, monsterMultiplier = 50`.\\nA user staked a `FrankenMonsters` and his voting power = 10 here.\\nAfter that, the admin changed `monsterMultiplier = 60`.\\nWhen a user tries to unstake the NFT, the token voting power will be `20 * 60 / 100 = 12` here.\\nSo it will revert with uint underflow here.\\nAfter all, he can't unstake the NFT. | I think we should add a mapping like `tokenVotingPower` to save an original token voting power when users stake the token and decrease the same amount when they unstake. | `votesFromOwnedTokens` might be updated wrongly or users can't unstake for the worst case because it doesn't decrease the same token voting power while unstaking. | ```\\n function getTokenVotingPower(uint _tokenId) public override view returns (uint) {\\n if (ownerOf(_tokenId) == address(0)) revert NonExistentToken();\\n\\n // If tokenId < 10000, it's a FrankenPunk, so 100/100 = a multiplier of 1\\n uint multiplier = _tokenId < 10_000 ? PERCENT : monsterMultiplier;\\n \\n // evilBonus will return 0 for all FrankenMonsters, as they are not eligible for the evil bonus\\n return ((baseVotes * multiplier) / PERCENT) + stakedTimeBonus[_tokenId] + evilBonus(_tokenId);\\n }\\n```\\n |
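A minimal sketch of the recommendation above; the mapping name and the exact hook points inside `_stakeToken` / `_unstakeToken` are assumptions:

```
// Cache the voting power granted at stake time and release exactly that
// amount at unstake, so later changes to baseVotes / monsterMultiplier
// cannot cause an underflow.
mapping(uint256 => uint256) public stakedVotingPower;

function _recordStakePower(uint256 _tokenId) internal returns (uint256 power) {
    power = getTokenVotingPower(_tokenId); // computed with the settings in force now
    stakedVotingPower[_tokenId] = power;
}

function _releaseStakePower(uint256 _tokenId) internal returns (uint256 power) {
    power = stakedVotingPower[_tokenId];   // same amount that was added at stake time
    delete stakedVotingPower[_tokenId];
}
```

`_stakeToken` would call the first helper when crediting votes, and `_unstakeToken` the second when debiting them.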
Staking#_unstake removes votes from wrong person if msg.sender != owner | high | Staking#_unstake allows any msg.sender to unstake tokens for any owner that has approved them. The issue is that even when msg.sender != owner the votes are removed from msg.sender instead of owner. The result is that the owner keeps their votes and msg.sender loses theirs. This could be abused to hijack or damage voting.\\n```\\naddress owner = ownerOf(_tokenId);\\nif (msg.sender != owner && !isApprovedForAll[owner][msg.sender] && msg.sender != getApproved[_tokenId]) revert NotAuthorized();\\n```\\n\\nStaking#_unstake allows any msg.sender to unstake tokens for any owner that has approved them.\\n```\\nuint lostVotingPower;\\nfor (uint i = 0; i < numTokens; i++) {\\n lostVotingPower += _unstakeToken(_tokenIds[i], _to);\\n}\\n\\nvotesFromOwnedTokens[msg.sender] -= lostVotingPower;\\n// Since the delegate currently has the voting power, it must be removed from their balance\\n// If the user doesn't delegate, delegates(msg.sender) will return self\\ntokenVotingPower[getDelegate(msg.sender)] -= lostVotingPower;\\ntotalTokenVotingPower -= lostVotingPower;\\n```\\n\\nAfter looping through _unstakeToken all accumulated votes are removed from msg.sender. The problem with this is that msg.sender is allowed to unstake tokens for users other than themselves and in these cases they will lose votes rather than the user who owns the token.\\nExample: User A and User B both stake tokens and have 10 votes each. User A approves User B to unstake their tokens. User B calls unstake for User A. User B is msg.sender and User A is owner. The votes should be removed from owner but instead are removed from msg.sender. The result is that after unstaking User B has a vote balance of 0 while still having their locked token and User A has a vote balance of 10 and their token back. Now User B is unable to unstake their token because their votes will underflow on unstake, permanently trapping their NFT. | Remove the ability for users to unstake for other users | Votes are removed incorrectly if msg.sender != owner. By extension this would forever trap msg.sender's tokens in the contract. | ```\\naddress owner = ownerOf(_tokenId);\\nif (msg.sender != owner && !isApprovedForAll[owner][msg.sender] && msg.sender != getApproved[_tokenId]) revert NotAuthorized();\\n```\\n
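A short sketch of the recommendation, tightening the existing authorization check so only the owner can unstake:

```
address owner = ownerOf(_tokenId);
// Drop the approval paths entirely: only the owner may unstake, so votes are
// always debited from the same account they were credited to.
if (msg.sender != owner) revert NotAuthorized();
```

An alternative fix would be to keep approvals but debit `votesFromOwnedTokens[owner]` (and the owner's delegate) rather than `msg.sender`.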
castVote can be called by anyone even those without votes | medium | Governance#castVote can be called by anyone, even users that don't have any votes. Since the voting refund is per address, an adversary could use a large number of addresses to vote with zero votes to drain the vault.\\n```\\nfunction _castVote(address _voter, uint256 _proposalId, uint8 _support) internal returns (uint) {\\n // Only Active proposals can be voted on\\n if (state(_proposalId) != ProposalState.Active) revert InvalidStatus();\\n \\n // Only valid values for _support are 0 (against), 1 (for), and 2 (abstain)\\n if (_support > 2) revert InvalidInput();\\n\\n Proposal storage proposal = proposals[_proposalId];\\n\\n // If the voter has already voted, revert \\n Receipt storage receipt = proposal.receipts[_voter];\\n if (receipt.hasVoted) revert AlreadyVoted();\\n\\n // Calculate the number of votes a user is able to cast\\n // This takes into account delegation and community voting power\\n uint24 votes = (staking.getVotes(_voter)).toUint24();\\n\\n // Update the proposal's total voting records based on the votes\\n if (_support == 0) {\\n proposal.againstVotes = proposal.againstVotes + votes;\\n } else if (_support == 1) {\\n proposal.forVotes = proposal.forVotes + votes;\\n } else if (_support == 2) {\\n proposal.abstainVotes = proposal.abstainVotes + votes;\\n }\\n\\n // Update the user's receipt for this proposal\\n receipt.hasVoted = true;\\n receipt.support = _support;\\n receipt.votes = votes;\\n\\n // Make these updates after the vote so it doesn't impact voting power for this vote.\\n ++totalCommunityScoreData.votes;\\n\\n // We can update the total community voting power with no check because if you can vote, \\n // it means you have votes so you haven't delegated.\\n ++userCommunityScoreData[_voter].votes;\\n\\n return votes;\\n}\\n```\\n\\nNowhere in the flow of voting does the function revert if the user calling it doesn't actually have any votes. staking#getVotes won't revert under any circumstances. Governance#_castVote only reverts if 1) the proposal isn't active 2) support > 2 or 3) if the user has already voted. The result is that any user can vote even if they don't have any votes, allowing users to maliciously burn vault funds by voting and claiming the vote refund. | Governance#_castVote should revert if msg.sender doesn't have any votes:\\n```\\n // Calculate the number of votes a user is able to cast\\n // This takes into account delegation and community voting power\\n uint24 votes = (staking.getVotes(_voter)).toUint24();\\n\\n+ if (votes == 0) revert NoVotes();\\n\\n // Update the proposal's total voting records based on the votes\\n if (_support == 0) {\\n proposal.againstVotes = proposal.againstVotes + votes;\\n } else if (_support == 1) {\\n proposal.forVotes = proposal.forVotes + votes;\\n } else if (_support == 2) {\\n proposal.abstainVotes = proposal.abstainVotes + votes;\\n }\\n```\\n | Vault can be drained maliciously by users with no votes | ```\\nfunction _castVote(address _voter, uint256 _proposalId, uint8 _support) internal returns (uint) {\\n // Only Active proposals can be voted on\\n if (state(_proposalId) != ProposalState.Active) revert InvalidStatus();\\n \\n // Only valid values for _support are 0 (against), 1 (for), and 2 (abstain)\\n if (_support > 2) revert InvalidInput();\\n\\n Proposal storage proposal = proposals[_proposalId];\\n\\n // If the voter has already voted, revert \\n Receipt storage receipt = proposal.receipts[_voter];\\n if (receipt.hasVoted) revert AlreadyVoted();\\n\\n // Calculate the number of votes a user is able to cast\\n // This takes into account delegation and community voting power\\n uint24 votes = (staking.getVotes(_voter)).toUint24();\\n\\n // Update the proposal's total voting records based on the votes\\n if (_support == 0) {\\n proposal.againstVotes = proposal.againstVotes + votes;\\n } else if (_support == 1) {\\n proposal.forVotes = proposal.forVotes + votes;\\n } else if (_support == 2) {\\n proposal.abstainVotes = proposal.abstainVotes + votes;\\n }\\n\\n // Update the user's receipt for this proposal\\n receipt.hasVoted = true;\\n receipt.support = _support;\\n receipt.votes = votes;\\n\\n // Make these updates after the vote so it doesn't impact voting power for this vote.\\n ++totalCommunityScoreData.votes;\\n\\n // We can update the total community voting power with no check because if you can vote, \\n // it means you have votes so you haven't delegated.\\n ++userCommunityScoreData[_voter].votes;\\n\\n return votes;\\n}\\n```\\n
Delegate can keep delegatee trapped indefinitely | medium | Users are allowed to delegate their votes to other users. Since staking does not implement checkpoints, users are not allowed to delegate or unstake during an active proposal if their delegate has already voted. A malicious delegate can abuse this by creating proposals so that there is always an active proposal and their delegatees are always locked to them.\\n```\\nmodifier lockedWhileVotesCast() {\\n uint[] memory activeProposals = governance.getActiveProposals();\\n for (uint i = 0; i < activeProposals.length; i++) {\\n if (governance.getReceipt(activeProposals[i], getDelegate(msg.sender)).hasVoted) revert TokenLocked();\\n (, address proposer,) = governance.getProposalData(activeProposals[i]);\\n if (proposer == getDelegate(msg.sender)) revert TokenLocked();\\n }\\n _;\\n}\\n```\\n\\nThe above modifier is applied when unstaking or delegating. This reverts if the delegate of msg.sender either has voted or currently has an open proposal. The result is that under those conditions, the delegatee cannot unstake or delegate. A malicious delegate can abuse these conditions to keep their delegatees forever delegated to them. They would keep opening proposals so that delegatees could never unstake or delegate. A single user can only have one proposal open at a time, so they would use a secondary account to alternate and always keep an active proposal. | There should be a function to emergency eject the token from staking. To prevent abuse a token that has been emergency ejected should be blacklisted from staking again for a certain cooldown period, such as the length of the current voting period. | Delegatees can never unstake or delegate to anyone else | ```\\nmodifier lockedWhileVotesCast() {\\n uint[] memory activeProposals = governance.getActiveProposals();\\n for (uint i = 0; i < activeProposals.length; i++) {\\n if (governance.getReceipt(activeProposals[i], getDelegate(msg.sender)).hasVoted) revert TokenLocked();\\n (, address proposer,) = governance.getProposalData(activeProposals[i]);\\n if (proposer == getDelegate(msg.sender)) revert TokenLocked();\\n }\\n _;\\n}\\n```\\n
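A rough sketch of the suggested emergency eject; the function, the storage names and the cooldown length are assumptions, and a real implementation would have to perform the same vote accounting as a normal unstake:

```
mapping(uint256 => uint256) public lastEjected;

function emergencyEject(uint256 _tokenId) external {
    if (msg.sender != ownerOf(_tokenId)) revert NotAuthorized();

    lastEjected[_tokenId] = block.timestamp;   // start the restaking cooldown
    _unstakeToken(_tokenId, msg.sender);       // deliberately bypasses lockedWhileVotesCast
}

// Inside the staking path, reject tokens still in cooldown:
// if (block.timestamp < lastEjected[_tokenId] + votingPeriod) revert TokenLocked();
```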
If a user approves junior vault tokens to WithdrawPeriphery, anyone can withdraw/redeem his/her token | high | If users want to withdraw/redeem tokens by WithdrawPeriphery, they should approve token approval to WithdrawPeriphery, then call `withdrawToken()` or `redeemToken()`. But if users approve `dnGmxJuniorVault` to WithdrawPeriphery, anyone can withdraw/redeem his/her token.\\nUsers should approve `dnGmxJuniorVault` before calling `withdrawToken()` or redeemToken():\\n```\\n function withdrawToken(\\n address from,\\n address token,\\n address receiver,\\n uint256 sGlpAmount\\n ) external returns (uint256 amountOut) {\\n // user has approved periphery to use junior vault shares\\n dnGmxJuniorVault.withdraw(sGlpAmount, address(this), from);\\n// rest of code\\n\\n function redeemToken(\\n address from,\\n address token,\\n address receiver,\\n uint256 sharesAmount\\n ) external returns (uint256 amountOut) {\\n // user has approved periphery to use junior vault shares\\n dnGmxJuniorVault.redeem(sharesAmount, address(this), from);\\n// rest of code\\n```\\n\\nFor better user experience, we always use `approve(WithdrawPeriphery, type(uint256).max)`. It means that if Alice approves the max amount, anyone can withdraw/redeem her tokens anytime. Another scenario is that if Alice approves 30 amounts, she wants to call `withdrawToken` to withdraw 30 tokens. But in this case Alice should send two transactions separately, then an attacker can frontrun `withdrawToken` transaction and withdraw Alice's token. | Replace `from` parameter by `msg.sender`.\\n```\\n // user has approved periphery to use junior vault shares\\n dnGmxJuniorVault.withdraw(sGlpAmount, address(this), msg.sender);\\n\\n // user has approved periphery to use junior vault shares\\n dnGmxJuniorVault.redeem(sharesAmount, address(this), msg.sender);\\n```\\n | Attackers can frontrun withdraw/redeem transactions and steal tokens. And some UI always approves max amount, which means that anyone can withdraw users tokens. | ```\\n function withdrawToken(\\n address from,\\n address token,\\n address receiver,\\n uint256 sGlpAmount\\n ) external returns (uint256 amountOut) {\\n // user has approved periphery to use junior vault shares\\n dnGmxJuniorVault.withdraw(sGlpAmount, address(this), from);\\n// rest of code\\n\\n function redeemToken(\\n address from,\\n address token,\\n address receiver,\\n uint256 sharesAmount\\n ) external returns (uint256 amountOut) {\\n // user has approved periphery to use junior vault shares\\n dnGmxJuniorVault.redeem(sharesAmount, address(this), from);\\n// rest of code\\n```\\n |
DnGmxJuniorVaultManager#harvestFees can push junior vault borrowedUSDC above borrow cap and DOS vault | medium | DnGmxJuniorVaultManager#harvestFees grants fees to the senior vault by converting the WETH to USDC and staking it directly. The result is that the senior vault gains value indirectly by increasing the debt of the junior vault. If the junior vault is already at its borrow cap this will push its total borrow over the borrow cap, causing DnGmxSeniorVault#availableBorrow to underflow and revert. This is called each time a user deposits or withdraws from the junior vault, meaning that users can no longer deposit to or withdraw from the junior vault.\\n```\\n if (_seniorVaultWethRewards > state.wethConversionThreshold) {\\n // converts senior tranche share of weth into usdc and deposit into AAVE\\n // Deposit aave vault share to AAVE in usdc\\n uint256 minUsdcAmount = _getTokenPriceInUsdc(state, state.weth).mulDivDown(\\n _seniorVaultWethRewards * (MAX_BPS - state.slippageThresholdSwapEthBps),\\n MAX_BPS * PRICE_PRECISION\\n );\\n // swaps weth into usdc\\n (uint256 aaveUsdcAmount, ) = state._swapToken(\\n address(state.weth),\\n _seniorVaultWethRewards,\\n minUsdcAmount\\n );\\n\\n // supplies usdc into AAVE\\n state._executeSupply(address(state.usdc), aaveUsdcAmount);\\n\\n // resets senior tranche rewards\\n state.seniorVaultWethRewards = 0;\\n```\\n\\nThe above lines convert the WETH owed to the senior vault to USDC and deposit it into Aave, increasing the aUSDC balance of the junior vault.\\n```\\nfunction getUsdcBorrowed() public view returns (uint256 usdcAmount) {\\n return\\n uint256(\\n state.aUsdc.balanceOf(address(this)).toInt256() -\\n state.dnUsdcDeposited -\\n state.unhedgedGlpInUsdc.toInt256()\\n );\\n}\\n```\\n\\nThe amount of USDC borrowed is calculated based on the amount of aUSDC that the junior vault has. By depositing the fees directly above, the junior vault has effectively "borrowed" more USDC. This can be problematic if the junior vault is already at its borrow cap.\\n```\\nfunction availableBorrow(address borrower) public view returns (uint256 availableAUsdc) {\\n uint256 availableBasisCap = borrowCaps[borrower] - IBorrower(borrower).getUsdcBorrowed();\\n uint256 availableBasisBalance = aUsdc.balanceOf(address(this));\\n\\n availableAUsdc = availableBasisCap < availableBasisBalance ? availableBasisCap : availableBasisBalance;\\n}\\n```\\n\\nIf the vault is already at its borrow cap then the line calculating `availableBasisCap` will underflow and revert. | Check if borrowed exceeds borrow cap and return zero to avoid underflow:\\n```\\nfunction availableBorrow(address borrower) public view returns (uint256 availableAUsdc) {\\n\\n+ uint256 borrowCap = borrowCaps[borrower];\\n+ uint256 borrowed = IBorrower(borrower).getUsdcBorrowed();\\n\\n+ if (borrowed > borrowCap) return 0;\\n\\n+ uint256 availableBasisCap = borrowCap - borrowed;\\n\\n- uint256 availableBasisCap = borrowCaps[borrower] - IBorrower(borrower).getUsdcBorrowed();\\n uint256 availableBasisBalance = aUsdc.balanceOf(address(this));\\n\\n availableAUsdc = availableBasisCap < availableBasisBalance ? availableBasisCap : availableBasisBalance;\\n}\\n```\\n | availableBorrow will revert causing deposits/withdraws to revert | ```\\nfunction availableBorrow(address borrower) public view returns (uint256 availableAUsdc) {\\n uint256 availableBasisCap = borrowCaps[borrower] - IBorrower(borrower).getUsdcBorrowed();\\n uint256 availableBasisBalance = aUsdc.balanceOf(address(this));\\n\\n availableAUsdc = availableBasisCap < availableBasisBalance ? availableBasisCap : availableBasisBalance;\\n}\\n```\\n
WithdrawPeriphery#_convertToToken slippage control is broken for any token other than USDC | medium | WithdrawPeriphery allows the user to redeem junior share vaults to any token available on GMX, applying a fixed slippage threshold to all redeems. The slippage calculation always returns the number of tokens to 6 decimals. This works fine for USDC but for other tokens like WETH or WBTC that are 18 decimals the slippage protection is completely ineffective and can lead to loss of funds for users that are withdrawing.\\n```\\nfunction _convertToToken(address token, address receiver) internal returns (uint256 amountOut) {\\n // this value should be whatever glp is received by calling withdraw/redeem to junior vault\\n uint256 outputGlp = fsGlp.balanceOf(address(this));\\n\\n // using min price of glp because giving in glp\\n uint256 glpPrice = _getGlpPrice(false);\\n\\n // using max price of token because taking token out of gmx\\n uint256 tokenPrice = gmxVault.getMaxPrice(token);\\n\\n // apply slippage threshold on top of estimated output amount\\n uint256 minTokenOut = outputGlp.mulDiv(glpPrice * (MAX_BPS - slippageThreshold), tokenPrice * MAX_BPS);\\n\\n // will revert if atleast minTokenOut is not received\\n amountOut = rewardRouter.unstakeAndRedeemGlp(address(token), outputGlp, minTokenOut, receiver);\\n}\\n```\\n\\nWithdrawPeriphery allows the user to redeem junior share vaults to any token available on GMX. To prevent users from losing large amounts of value to MEV the contract applies a fixed percentage slippage. minToken out is returned to 6 decimals regardless of the token being requested. This works for tokens with 6 decimals like USDC, but is completely ineffective for the majority of tokens that aren't. | Adjust minTokenOut to match the decimals of the token:\\n```\\n uint256 minTokenOut = outputGlp.mulDiv(glpPrice * (MAX_BPS - slippageThreshold), tokenPrice * MAX_BPS);\\n+ minTokenOut = minTokenOut * 10 ** (token.decimals() - 6);\\n```\\n | Users withdrawing tokens other than USDC can suffer huge loss of funds due to virtually no slippage protection | ```\\nfunction _convertToToken(address token, address receiver) internal returns (uint256 amountOut) {\\n // this value should be whatever glp is received by calling withdraw/redeem to junior vault\\n uint256 outputGlp = fsGlp.balanceOf(address(this));\\n\\n // using min price of glp because giving in glp\\n uint256 glpPrice = _getGlpPrice(false);\\n\\n // using max price of token because taking token out of gmx\\n uint256 tokenPrice = gmxVault.getMaxPrice(token);\\n\\n // apply slippage threshold on top of estimated output amount\\n uint256 minTokenOut = outputGlp.mulDiv(glpPrice * (MAX_BPS - slippageThreshold), tokenPrice * MAX_BPS);\\n\\n // will revert if atleast minTokenOut is not received\\n amountOut = rewardRouter.unstakeAndRedeemGlp(address(token), outputGlp, minTokenOut, receiver);\\n}\\n```\\n |
WithdrawPeriphery uses incorrect value for MAX_BPS which will allow much higher slippage than intended | medium | WithdrawPeriphery accidentally uses an incorrect value for MAX_BPS which will allow for much higher slippage than intended.\\n```\\nuint256 internal constant MAX_BPS = 1000;\\n```\\n\\nBPS is typically 10,000 and using 1000 is inconsistent with the rest of the ecosystem contracts and tests. The result is that slippage values will be 10x higher than intended. | Correct MAX_BPS:\\n```\\n- uint256 internal constant MAX_BPS = 1000;\\n+ uint256 internal constant MAX_BPS = 10_000;\\n```\\n | Unexpected slippage resulting in loss of user funds, likely due to MEV | ```\\nuint256 internal constant MAX_BPS = 1000;\\n```\\n |
Early depositors to DnGmxSeniorVault can manipulate exchange rates to steal funds from later depositors | medium | To calculate the exchange rate for shares in DnGmxSeniorVault it divides the total supply of shares by the totalAssets of the vault. The first depositor can mint a very small number of shares and then donate aUSDC to the vault to grossly manipulate the share price. When later depositors deposit into the vault, they will lose value due to precision loss and the adversary will profit.\\n```\\nfunction convertToShares(uint256 assets) public view virtual returns (uint256) {\\n uint256 supply = totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? assets : assets.mulDivDown(supply, totalAssets());\\n}\\n```\\n\\nShare exchange rate is calculated using the total supply of shares and the totalAsset. This can lead to exchange rate manipulation. As an example, an adversary can mint a single share, then donate 1e8 aUSDC. Minting the first share established a 1:1 ratio but then donating 1e8 changed the ratio to 1:1e8. Now any deposit lower than 1e8 (100 aUSDC) will suffer from precision loss and the attacker's share will benefit from it.\\nThis same vector is present in DnGmxJuniorVault. | `initialize` should include a small deposit, such as 1e6 aUSDC, that mints the shares to a dead address to permanently lock the exchange rate:\\n```\\n aUsdc.approve(address(pool), type(uint256).max);\\n IERC20(asset).approve(address(pool), type(uint256).max);\\n\\n+ deposit(1e6, DEAD_ADDRESS);\\n```\\n | Adversary can effectively steal funds from later users | ```\\nfunction convertToShares(uint256 assets) public view virtual returns (uint256) {\\n uint256 supply = totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? assets : assets.mulDivDown(supply, totalAssets());\\n}\\n```\\n
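A worked example of the donation attack using the vault's own formula `shares = assets * supply / totalAssets`; all numbers are hypothetical:

```
function victimSharesExample() pure returns (uint256) {
    uint256 supply = 1;               // attacker minted a single share with 1 wei
    uint256 totalAssets = 1e8 + 1;    // attacker then donated 1e8 aUSDC directly
    uint256 victimDeposit = 1e8 - 1;  // victim deposits just under the donation

    // Division rounds toward zero, so the victim receives 0 shares while the
    // attacker's single share now backs the victim's entire deposit.
    return (victimDeposit * supply) / totalAssets; // == 0
}
```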
The total community voting power is updated incorrectly when a user delegates. | high | When a user delegates their voting power from staked tokens, the total community voting power should be updated. But the update logic is not correct, so the total community voting power could end up with wrong values.\\n```\\n tokenVotingPower[currentDelegate] -= amount;\\n tokenVotingPower[_delegatee] += amount; \\n\\n // If a user is delegating back to themselves, they regain their community voting power, so adjust totals up\\n if (_delegator == _delegatee) {\\n _updateTotalCommunityVotingPower(_delegator, true);\\n\\n // If a user delegates away their votes, they forfeit their community voting power, so adjust totals down\\n } else if (currentDelegate == _delegator) {\\n _updateTotalCommunityVotingPower(_delegator, false);\\n }\\n```\\n\\nWhen the total community voting power is increased in the first if statement, _delegator's token voting power might already be positive and their community voting power might already have been added to the total community voting power.\\nAlso, currentDelegate's token voting power might still be positive after delegation, so the community voting power shouldn't be removed in that case. | Add more conditions to check if the msg.sender delegated or not.\\n```\\n if (_delegator == _delegatee) {\\n if(tokenVotingPower[_delegatee] == amount) {\\n _updateTotalCommunityVotingPower(_delegator, true);\\n }\\n if(tokenVotingPower[currentDelegate] == 0) {\\n _updateTotalCommunityVotingPower(currentDelegate, false); \\n }\\n } else if (currentDelegate == _delegator) {\\n if(tokenVotingPower[_delegatee] == amount) {\\n _updateTotalCommunityVotingPower(_delegatee, true);\\n }\\n if(tokenVotingPower[_delegator] == 0) {\\n _updateTotalCommunityVotingPower(_delegator, false); \\n }\\n }\\n```\\n | The total community voting power can be incorrect. | ```\\n tokenVotingPower[currentDelegate] -= amount;\\n tokenVotingPower[_delegatee] += amount; \\n\\n // If a user is delegating back to themselves, they regain their community voting power, so adjust totals up\\n if (_delegator == _delegatee) {\\n _updateTotalCommunityVotingPower(_delegator, true);\\n\\n // If a user delegates away their votes, they forfeit their community voting power, so adjust totals down\\n } else if (currentDelegate == _delegator) {\\n _updateTotalCommunityVotingPower(_delegator, false);\\n }\\n```\\n
Staking#changeStakeTime and changeStakeAmount are problematic given current staking design | medium | Staking#changeStakeTime and changeStakeAmount allow the locking bonus to be modified. Any change to this value will cause voting imbalance in the system. If changes result in a lower total bonus then existing stakers will keep a permanent advantage over new stakers. If the bonus is increased then existing stakers will be at a disadvantage because they will be locked and unable to realize the new staking bonus.\\n```\\nfunction _stakeToken(uint _tokenId, uint _unlockTime) internal returns (uint) {\\n if (_unlockTime > 0) {\\n unlockTime[_tokenId] = _unlockTime;\\n uint fullStakedTimeBonus = ((_unlockTime - block.timestamp) * stakingSettings.maxStakeBonusAmount) / stakingSettings.maxStakeBonusTime;\\n stakedTimeBonus[_tokenId] = _tokenId < 10000 ? fullStakedTimeBonus : fullStakedTimeBonus / 2;\\n }\\n```\\n\\nWhen a token is staked, its stakedTimeBonus is stored. This means that any changes to stakingSettings.maxStakeBonusAmount or stakingSettings.maxStakeBonusTime won't affect tokens that are already staked. Storing the value is essential to prevent changes to the values causing major damage to the voting, but it leads to another, more subtle issue when the settings are changed that will put either existing or new stakers at a disadvantage.\\nExample: User A stakes when maxStakeBonusAmount = 10 and stakes long enough to get the entire bonus. Now maxStakeBonusAmount is changed to 20. User A is unable to unstake their token right away because it is locked. They are now at a disadvantage because other users can now stake and get a bonus of 20 while they are stuck with only a bonus of 10. Now maxStakeBonusAmount is changed to 5. User A now has an advantage because other users can now only stake for a bonus of 5. If User A never unstakes then they will forever have that advantage over new users. | I recommend implementing a poke function that can be called by any user on any user. This function should loop through all tokens (or the tokens specified) and recalculate their voting power based on current multipliers, allowing all users to be normalized to prevent any abuse. | Voting power becomes skewed for users when Staking#changeStakeTime and changeStakeAmount are used | ```\\nfunction _stakeToken(uint _tokenId, uint _unlockTime) internal returns (uint) {\\n if (_unlockTime > 0) {\\n unlockTime[_tokenId] = _unlockTime;\\n uint fullStakedTimeBonus = ((_unlockTime - block.timestamp) * stakingSettings.maxStakeBonusAmount) / stakingSettings.maxStakeBonusTime;\\n stakedTimeBonus[_tokenId] = _tokenId < 10000 ? fullStakedTimeBonus : fullStakedTimeBonus / 2;\\n }\\n```\\n
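A hedged sketch of the suggested poke. It assumes the staking timestamp is also recorded at stake time (`stakedAt`), which the current contract does not store, and that the staked ERC721 records the staker as its owner:

```
function poke(uint256 _tokenId) external {
    // Recompute the time bonus with the *current* settings
    uint256 newBonus = ((unlockTime[_tokenId] - stakedAt[_tokenId]) * stakingSettings.maxStakeBonusAmount)
        / stakingSettings.maxStakeBonusTime;
    if (_tokenId >= 10_000) newBonus = newBonus / 2; // FrankenMonsters get half, as at stake time

    uint256 oldBonus = stakedTimeBonus[_tokenId];
    stakedTimeBonus[_tokenId] = newBonus;

    // Route the delta through the same accounting a stake/unstake would touch
    address staker = ownerOf(_tokenId);
    votesFromOwnedTokens[staker] = votesFromOwnedTokens[staker] - oldBonus + newBonus;
    tokenVotingPower[getDelegate(staker)] = tokenVotingPower[getDelegate(staker)] - oldBonus + newBonus;
    totalTokenVotingPower = totalTokenVotingPower - oldBonus + newBonus;
}
```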
Adversary can abuse delegating to lower quorum | medium | When a user delegates to another user they surrender their community voting power. The quorum threshold for a vote is determined when it is created. Users can artificially lower quorum by delegating to other users then creating a proposal. After it's created they can self delegate and regain all their community voting power, making quorum easier to reach.\\n```\\n// If a user is delegating back to themselves, they regain their community voting power, so adjust totals up\\nif (_delegator == _delegatee) {\\n _updateTotalCommunityVotingPower(_delegator, true);\\n\\n// If a user delegates away their votes, they forfeit their community voting power, so adjust totals down\\n} else if (currentDelegate == _delegator) {\\n _updateTotalCommunityVotingPower(_delegator, false);\\n}\\n```\\n\\nWhen a user delegates to a user other than themselves, they forfeit their community votes, which lowers the total number of votes. When they self delegate again they will recover all their community voting power.\\n```\\n newProposal.id = newProposalId.toUint96();\\n newProposal.proposer = msg.sender;\\n newProposal.targets = _targets;\\n newProposal.values = _values;\\n newProposal.signatures = _signatures;\\n newProposal.calldatas = _calldatas;\\n\\n //@audit quorum votes locked at creation\\n\\n newProposal.quorumVotes = quorumVotes().toUint24();\\n newProposal.startTime = (block.timestamp + votingDelay).toUint32();\\n newProposal.endTime = (block.timestamp + votingDelay + votingPeriod).toUint32();\\n```\\n\\nWhen a proposal is created the quorum is locked at the time at which it's created. Users can combine these two quirks to abuse the voting.\\nExample:\\nAssume there are 1000 total votes and quorum is 20%. Assume 5 users each have 35 votes, 10 base votes and 25 community votes. In this scenario quorum is 200 votes, which they can't achieve. Each user delegates to other users, reducing each of their votes by 25 and reducing the total number of votes to 875. Now they can create a proposal and quorum will now be 175 votes (875*20%). They all self delegate and recover their community votes. Now they can reach quorum and pass their proposal. | One solution would be to add a vote cooldown to users after they delegate, long enough to make sure all active proposals have expired before they're able to vote. The other option would be to implement checkpoints. | Users can collude to lower quorum and pass proposals more easily | ```\\n// If a user is delegating back to themselves, they regain their community voting power, so adjust totals up\\nif (_delegator == _delegatee) {\\n _updateTotalCommunityVotingPower(_delegator, true);\\n\\n// If a user delegates away their votes, they forfeit their community voting power, so adjust totals down\\n} else if (currentDelegate == _delegator) {\\n _updateTotalCommunityVotingPower(_delegator, false);\\n}\\n```\\n
castVote can be called by anyone even those without votes | medium | Governance#castVote can be called by anyone, even users that don't have any votes. Since the voting refund is per address, an adversary could use a large number of addresses to vote with zero votes to drain the vault.\\n```\\nfunction _castVote(address _voter, uint256 _proposalId, uint8 _support) internal returns (uint) {\\n // Only Active proposals can be voted on\\n if (state(_proposalId) != ProposalState.Active) revert InvalidStatus();\\n \\n // Only valid values for _support are 0 (against), 1 (for), and 2 (abstain)\\n if (_support > 2) revert InvalidInput();\\n\\n Proposal storage proposal = proposals[_proposalId];\\n\\n // If the voter has already voted, revert \\n Receipt storage receipt = proposal.receipts[_voter];\\n if (receipt.hasVoted) revert AlreadyVoted();\\n\\n // Calculate the number of votes a user is able to cast\\n // This takes into account delegation and community voting power\\n uint24 votes = (staking.getVotes(_voter)).toUint24();\\n\\n // Update the proposal's total voting records based on the votes\\n if (_support == 0) {\\n proposal.againstVotes = proposal.againstVotes + votes;\\n } else if (_support == 1) {\\n proposal.forVotes = proposal.forVotes + votes;\\n } else if (_support == 2) {\\n proposal.abstainVotes = proposal.abstainVotes + votes;\\n }\\n\\n // Update the user's receipt for this proposal\\n receipt.hasVoted = true;\\n receipt.support = _support;\\n receipt.votes = votes;\\n\\n // Make these updates after the vote so it doesn't impact voting power for this vote.\\n ++totalCommunityScoreData.votes;\\n\\n // We can update the total community voting power with no check because if you can vote, \\n // it means you have votes so you haven't delegated.\\n ++userCommunityScoreData[_voter].votes;\\n\\n return votes;\\n}\\n```\\n\\nNowhere in the flow of voting does the function revert if the user calling it doesn't actually have any votes. staking#getVotes won't revert under any circumstances. Governance#_castVote only reverts if 1) the proposal isn't active 2) support > 2 or 3) if the user has already voted. The result is that any user can vote even if they don't have any votes, allowing users to maliciously burn vault funds by voting and claiming the vote refund. 
| Governance#_castVote should revert if msg.sender doesn't have any votes:\\n```\\n // Calculate the number of votes a user is able to cast\\n // This takes into account delegation and community voting power\\n uint24 votes = (staking.getVotes(_voter)).toUint24();\\n\\n+ if (votes == 0) revert NoVotes();\\n\\n // Update the proposal's total voting records based on the votes\\n if (_support == 0) {\\n proposal.againstVotes = proposal.againstVotes + votes;\\n } else if (_support == 1) {\\n proposal.forVotes = proposal.forVotes + votes;\\n } else if (_support == 2) {\\n proposal.abstainVotes = proposal.abstainVotes + votes;\\n }\\n```\\n | Vault can be drained maliciously by users with no votes | ```\\nfunction _castVote(address _voter, uint256 _proposalId, uint8 _support) internal returns (uint) {\\n // Only Active proposals can be voted on\\n if (state(_proposalId) != ProposalState.Active) revert InvalidStatus();\\n \\n // Only valid values for _support are 0 (against), 1 (for), and 2 (abstain)\\n if (_support > 2) revert InvalidInput();\\n\\n Proposal storage proposal = proposals[_proposalId];\\n\\n // If the voter has already voted, revert \\n Receipt storage receipt = proposal.receipts[_voter];\\n if (receipt.hasVoted) revert AlreadyVoted();\\n\\n // Calculate the number of votes a user is able to cast\\n // This takes into account delegation and community voting power\\n uint24 votes = (staking.getVotes(_voter)).toUint24();\\n\\n // Update the proposal's total voting records based on the votes\\n if (_support == 0) {\\n proposal.againstVotes = proposal.againstVotes + votes;\\n } else if (_support == 1) {\\n proposal.forVotes = proposal.forVotes + votes;\\n } else if (_support == 2) {\\n proposal.abstainVotes = proposal.abstainVotes + votes;\\n }\\n\\n // Update the user's receipt for this proposal\\n receipt.hasVoted = true;\\n receipt.support = _support;\\n receipt.votes = votes;\\n\\n // Make these updates after the vote so it doesn't impact voting power for this vote.\\n ++totalCommunityScoreData.votes;\\n\\n // We can update the total community voting power with no check because if you can vote, \\n // it means you have votes so you haven't delegated.\\n ++userCommunityScoreData[_voter].votes;\\n\\n return votes;\\n}\\n```\\n |
[Tomo-M3] Use safeMint instead of mint for ERC721 | medium | Use safeMint instead of mint for ERC721\\nThe `msg.sender` will be minted as a proof of staking NFT when `_stakeToken()` is called.\\nHowever, if `msg.sender` is a contract address that does not support ERC721, the NFT can be frozen in the contract.\\nAs per the documentation of EIP-721:\\nA wallet/broker/auction application MUST implement the wallet interface if it will accept safe transfers.\\nAs per the documentation of ERC721.sol by Openzeppelin\\n```\\n/**\\n * @dev Mints `tokenId` and transfers it to `to`.\\n *\\n * WARNING: Usage of this method is discouraged, use {_safeMint} whenever possible\\n *\\n * Requirements:\\n *\\n * - `tokenId` must not exist.\\n * - `to` cannot be the zero address.\\n *\\n * Emits a {Transfer} event.\\n */\\nfunction _mint(address to, uint256 tokenId) internal virtual {\\n```\\n | Use `safeMint` instead of `mint` to check received address support for ERC721 implementation. | Users possibly lose their NFTs | ```\\n/**\\n * @dev Mints `tokenId` and transfers it to `to`.\\n *\\n * WARNING: Usage of this method is discouraged, use {_safeMint} whenever possible\\n *\\n * Requirements:\\n *\\n * - `tokenId` must not exist.\\n * - `to` cannot be the zero address.\\n *\\n * Emits a {Transfer} event.\\n */\\nfunction _mint(address to, uint256 tokenId) internal virtual {\\n```\\n |
[Medium-1] Hardcoded `monsterMultiplier` in case of `stakedTimeBonus` disregards the updates done to `monsterMultiplier` through `setMonsterMultiplier()` | medium | [Medium-1] Hardcoded `monsterMultiplier` in case of `stakedTimeBonus` disregards the updates done to `monsterMultiplier` through `setMonsterMultiplier()`\\nFrankenDAO allows users to stake two types of NFTs, `Frankenpunks` and `Frankenmonsters`, one of which, `Frankenpunks`, is considered more valuable.\\nThis is achieved by reducing the votes applicable to `Frankenmonsters` by `monsterMultiplier`.\\n```\\nfunction getTokenVotingPower(uint _tokenId) public override view returns (uint) {\\n if (ownerOf(_tokenId) == address(0)) revert NonExistentToken();\\n\\n // If tokenId < 10000, it's a FrankenPunk, so 100/100 = a multiplier of 1\\n uint multiplier = _tokenId < 10_000 ? PERCENT : monsterMultiplier;\\n \\n // evilBonus will return 0 for all FrankenMonsters, as they are not eligible for the evil bonus\\n return ((baseVotes * multiplier) / PERCENT) + stakedTimeBonus[_tokenId] + evilBonus(_tokenId);\\n }\\n```\\n\\nThis `monsterMultiplier` is initially set to 50 and can be changed by a governance proposal.\\n```\\nfunction setMonsterMultiplier(uint _monsterMultiplier) external onlyExecutor {\\n emit MonsterMultiplierChanged(monsterMultiplier = _monsterMultiplier); \\n }\\n```\\n\\nHowever, one piece of code inside the FrankenDAO staking contract doesn't consider this and has the monster multiplier hardcoded.\\n```\\nfunction stake(uint[] calldata _tokenIds, uint _unlockTime) \\n----\\nfunction _stakeToken(uint _tokenId, uint _unlockTime) internal returns (uint) {\\n if (_unlockTime > 0) {\\n --------\\n stakedTimeBonus[_tokenId] = _tokenId < 10000 ? fullStakedTimeBonus : fullStakedTimeBonus / 2; \\n }\\n--------\\n```\\n\\nHence any update done to `monsterMultiplier` would not be reflected in the calculation of `stakedTimeBonus`, and thereby in votes. | Consider replacing the hardcoded value with monsterMultiplier | Any update done to monsterMultiplier would not be reflected in stakedTimeBonus; it would always remain as /2 or 50%.\\nLikelihood: Medium\\nOne needs to pass a governance proposal to change the monster multiplier, so this is definitely not a high likelihood; but it is not low either, as there is a clear provision in the spec regarding this. | ```\\nfunction getTokenVotingPower(uint _tokenId) public override view returns (uint) {\\n if (ownerOf(_tokenId) == address(0)) revert NonExistentToken();\\n\\n // If tokenId < 10000, it's a FrankenPunk, so 100/100 = a multiplier of 1\\n uint multiplier = _tokenId < 10_000 ? PERCENT : monsterMultiplier;\\n \\n // evilBonus will return 0 for all FrankenMonsters, as they are not eligible for the evil bonus\\n return ((baseVotes * multiplier) / PERCENT) + stakedTimeBonus[_tokenId] + evilBonus(_tokenId);\\n }\\n```\\n
`getCommunityVotingPower` doesn't calculate voting power correctly due to precision loss | medium | In `Staking.sol`, the getCommunityVotingPower function doesn't calculate the votes correctly due to precision loss.\\nIn the getCommunityVotingPower function, the `return` statement is where the mistake lies:\\n```\\n return \\n (votes * cpMultipliers.votes / PERCENT) + \\n (proposalsCreated * cpMultipliers.proposalsCreated / PERCENT) + \\n (proposalsPassed * cpMultipliers.proposalsPassed / PERCENT);\\n```\\n\\nHere, after each multiplication by the `Multipliers`, we immediately divide it by `PERCENT`. Every time we do a division, there is a certain amount of precision loss. When this is done three times, the loss accumulates. So instead, the division by `PERCENT` should be done after all 3 terms are added together.\\nNote that there is no loss if the `Multipliers` are multiples of `PERCENT`. But these values can be changed through governance later, so it's better to be careful and assume that they may not always be a multiple of `PERCENT`. | Do the division once after all terms are added together:\\n```\\n return \\n ( (votes * cpMultipliers.votes) + \\n (proposalsCreated * cpMultipliers.proposalsCreated) + \\n (proposalsPassed * cpMultipliers.proposalsPassed) ) / PERCENT;\\n }\\n```\\n | The community voting power of the user is calculated incorrectly. | ```\\n return \\n (votes * cpMultipliers.votes / PERCENT) + \\n (proposalsCreated * cpMultipliers.proposalsCreated / PERCENT) + \\n (proposalsPassed * cpMultipliers.proposalsPassed / PERCENT);\\n```\\n
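A worked example makes the precision loss above concrete. The following is a minimal, self-contained sketch rather than the FrankenDAO code; the multiplier value 33 and the three scores of 3 are assumptions chosen purely for illustration. Dividing each term by `PERCENT` truncates all three terms to zero, while a single division after summing returns 2.
```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative only: per-term division vs. a single division after summing.
contract PrecisionDemo {
    uint256 internal constant PERCENT = 100;

    // Mirrors the flawed pattern: each term is truncated before the sum.
    function perTermDivision(uint256 votes, uint256 created, uint256 passed, uint256 m)
        external pure returns (uint256)
    {
        return (votes * m / PERCENT) + (created * m / PERCENT) + (passed * m / PERCENT);
    }

    // Mirrors the recommended pattern: truncation happens once, on the total.
    function singleDivision(uint256 votes, uint256 created, uint256 passed, uint256 m)
        external pure returns (uint256)
    {
        return ((votes * m) + (created * m) + (passed * m)) / PERCENT;
    }
}

// perTermDivision(3, 3, 3, 33) == 0, while singleDivision(3, 3, 3, 33) == 2.
```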
Delegate can keep delegatee trapped indefinitely | medium | Users are allowed to delegate their votes to other users. Since staking does not implement checkpoints, users are not allowed to delegate or unstake during an active proposal if their delegate has already voted. A malicious delegate can abuse this by creating proposals so that there is always an active proposal and their delegatees are always locked to them.\\n```\\nmodifier lockedWhileVotesCast() {\\n uint[] memory activeProposals = governance.getActiveProposals();\\n for (uint i = 0; i < activeProposals.length; i++) {\\n if (governance.getReceipt(activeProposals[i], getDelegate(msg.sender)).hasVoted) revert TokenLocked();\\n (, address proposer,) = governance.getProposalData(activeProposals[i]);\\n if (proposer == getDelegate(msg.sender)) revert TokenLocked();\\n }\\n _;\\n}\\n```\\n\\nThe above modifier is applied when unstaking or delegating. It reverts if the delegate of msg.sender either has voted or currently has an open proposal. The result is that under those conditions, the delegatee cannot unstake or delegate. A malicious delegate can abuse these conditions to keep their delegatees forever delegated to them. They would keep opening proposals so that delegatees could never unstake or delegate. A single user can only have one proposal open at a time, so they would use a secondary account to alternate and always keep an active proposal. | There should be a function to emergency eject the token from staking. To prevent abuse, a token that has been emergency ejected should be blacklisted from staking again for a certain cooldown period, such as the length of the current voting period. | Delegatees can never unstake or delegate to anyone else | ```\\nmodifier lockedWhileVotesCast() {\\n uint[] memory activeProposals = governance.getActiveProposals();\\n for (uint i = 0; i < activeProposals.length; i++) {\\n if (governance.getReceipt(activeProposals[i], getDelegate(msg.sender)).hasVoted) revert TokenLocked();\\n (, address proposer,) = governance.getProposalData(activeProposals[i]);\\n if (proposer == getDelegate(msg.sender)) revert TokenLocked();\\n }\\n _;\\n}\\n```\\n
Rounding error when calling `dodoMultiswap()` can lead to reverted transactions or loss of user funds | medium | The calculation of the proportions for the split swap in function `_multiSwap` doesn't account for rounding errors.\\nThe amount of `midToken` that will be transferred to each adapter is calculated by the formula `curAmount = curTotalAmount * weight / totalWeight`\\n```\\nif (assetFrom[i - 1] == address(this)) {\\n uint256 curAmount = curTotalAmount * curPoolInfo.weight / curTotalWeight;\\n\\n\\n if (curPoolInfo.poolEdition == 1) {\\n //For using transferFrom pool (like dodoV1, Curve), pool call transferFrom function to get tokens from adapter\\n IERC20(midToken[i]).transfer(curPoolInfo.adapter, curAmount);\\n } else {\\n //For using transfer pool (like dodoV2), pool determine swapAmount through balanceOf(Token) - reserve\\n IERC20(midToken[i]).transfer(curPoolInfo.pool, curAmount);\\n }\\n}\\n```\\n\\nIn scenarios where `curTotalAmount * curPoolInfo.weight` is not divisible by `curTotalWeight`, some tokens will be left over after the swap.\\nFor some transactions, if the user sets `minReturnAmount` strictly, this may cause a revert. For tokens with few decimals and a high value, it can cause a significant loss for the sender. | Add an accumulator variable to track the total amount transferred across the split swaps. In the last split swap, instead of calculating `curAmount` by the formula above, just take the remaining amount to swap. | The transaction reverts because not enough `toToken` is received\\nThe sender can lose a small amount of tokens | ```\\nif (assetFrom[i - 1] == address(this)) {\\n uint256 curAmount = curTotalAmount * curPoolInfo.weight / curTotalWeight;\\n\\n\\n if (curPoolInfo.poolEdition == 1) {\\n //For using transferFrom pool (like dodoV1, Curve), pool call transferFrom function to get tokens from adapter\\n IERC20(midToken[i]).transfer(curPoolInfo.adapter, curAmount);\\n } else {\\n //For using transfer pool (like dodoV2), pool determine swapAmount through balanceOf(Token) - reserve\\n IERC20(midToken[i]).transfer(curPoolInfo.pool, curAmount);\\n }\\n}\\n```\\n
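To illustrate the recommendation above, here is a minimal sketch of the accumulator approach; the library and variable names are assumptions for illustration, not DODO's actual code. Every split except the last uses the pro-rata formula, and the last split takes whatever remains, so no rounding dust is stranded and the distributed total always equals the input amount.
```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative only: distribute `total` across `weights` without leaving rounding dust.
library SplitMath {
    function splitAmounts(uint256 total, uint256[] memory weights, uint256 totalWeight)
        internal pure returns (uint256[] memory amounts)
    {
        amounts = new uint256[](weights.length);
        uint256 distributed;
        for (uint256 i = 0; i < weights.length; i++) {
            if (i == weights.length - 1) {
                amounts[i] = total - distributed; // last split absorbs the rounding remainder
            } else {
                amounts[i] = (total * weights[i]) / totalWeight;
            }
            distributed += amounts[i];
        }
    }
}
```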
Issue when handling native ETH trade and WETH trade in DODO RouterProxy#externalSwap | medium | Lack of logic to wrap the native ETH to WETH in function externalSwap\\nThe function externalSwap can handle external swaps with 0x, 1inch and paraswap or other external resources.\\n```\\n function externalSwap(\\n address fromToken,\\n address toToken,\\n address approveTarget,\\n address swapTarget,\\n uint256 fromTokenAmount,\\n uint256 minReturnAmount,\\n bytes memory feeData,\\n bytes memory callDataConcat,\\n uint256 deadLine\\n ) external payable judgeExpired(deadLine) returns (uint256 receiveAmount) { \\n require(isWhiteListedContract[swapTarget], "DODORouteProxy: Not Whitelist Contract"); \\n require(isApproveWhiteListedContract[approveTarget], "DODORouteProxy: Not Whitelist Appprove Contract"); \\n\\n // transfer in fromToken\\n if (fromToken != _ETH_ADDRESS_) {\\n // approve if needed\\n if (approveTarget != address(0)) {\\n IERC20(fromToken).universalApproveMax(approveTarget, fromTokenAmount);\\n }\\n\\n IDODOApproveProxy(_DODO_APPROVE_PROXY_).claimTokens(\\n fromToken,\\n msg.sender,\\n address(this),\\n fromTokenAmount\\n );\\n }\\n\\n // swap\\n uint256 toTokenOriginBalance;\\n if(toToken != _ETH_ADDRESS_) {\\n toTokenOriginBalance = IERC20(toToken).universalBalanceOf(address(this));\\n } else {\\n toTokenOriginBalance = IERC20(_WETH_).universalBalanceOf(address(this));\\n }\\n```\\n\\nNote in the code above that the fromToken can be set to _ETH_ADDRESS_, indicating the user wants to trade a native ETH pair. The function does have a payable modifier, so the user can send ETH along when calling this function.\\n\\nHowever, toTokenOriginBalance checks only the WETH balance instead of the ETH balance.\\n```\\n if(toToken != _ETH_ADDRESS_) {\\n toTokenOriginBalance = IERC20(toToken).universalBalanceOf(address(this));\\n } else {\\n toTokenOriginBalance = IERC20(_WETH_).universalBalanceOf(address(this));\\n }\\n```\\n\\nThen we do the swap:\\n```\\n(bool success, bytes memory result) = swapTarget.call{\\n value: fromToken == _ETH_ADDRESS_ ? 
fromTokenAmount : 0\\n}(callDataConcat);\\n```\\n\\nIf the fromToken is _ETH_ADDRESS_, we send the user-supplied fromTokenAmount without verifying it against msg.value.\\n\\nFinally, we use the before and after balance to get the amount received.\\n```\\n// calculate toToken amount\\n if(toToken != _ETH_ADDRESS_) {\\n receiveAmount = IERC20(toToken).universalBalanceOf(address(this)) - (\\n toTokenOriginBalance\\n );\\n } else {\\n receiveAmount = IERC20(_WETH_).universalBalanceOf(address(this)) - (\\n toTokenOriginBalance\\n );\\n }\\n```\\n\\nWe are checking the WETH amount instead of the ETH amount again.\\n\\nThe issue is that some trades may settle in native ETH. For example, we can look into the Paraswap contract.\\nIf we click the implementation contract and see the method swapOnUniswapV2Fork,\\ncode lines 927 - 944, it calls the function\\n```\\nfunction swapOnUniswapV2Fork(\\n address tokenIn,\\n uint256 amountIn,\\n uint256 amountOutMin,\\n address weth,\\n uint256[] calldata pools\\n)\\n external\\n payable\\n{\\n _swap(\\n tokenIn,\\n amountIn,\\n amountOutMin,\\n weth,\\n pools\\n );\\n}\\n```\\n\\nwhich calls:\\n```\\n function _swap(\\n address tokenIn,\\n uint256 amountIn,\\n uint256 amountOutMin,\\n address weth,\\n uint256[] memory pools\\n )\\n private\\n returns (uint256 tokensBought)\\n {\\n uint256 pairs = pools.length;\\n\\n require(pairs != 0, "At least one pool required");\\n\\n bool tokensBoughtEth;\\n\\n if (tokenIn == ETH_IDENTIFIER) {\\n require(amountIn == msg.value, "Incorrect msg.value");\\n IWETH(weth).deposit{value: msg.value}();\\n require(IWETH(weth).transfer(address(pools[0]), msg.value));\\n } else {\\n require(msg.value == 0, "Incorrect msg.value");\\n transferTokens(tokenIn, msg.sender, address(pools[0]), amountIn);\\n tokensBoughtEth = weth != address(0);\\n }\\n\\n tokensBought = amountIn;\\n\\n for (uint256 i = 0; i < pairs; ++i) {\\n uint256 p = pools[i];\\n address pool = address(p);\\n bool direction = p & DIRECTION_FLAG == 0;\\n\\n tokensBought = NewUniswapV2Lib.getAmountOut(\\n tokensBought, pool, direction, p >> FEE_OFFSET\\n );\\n (uint256 amount0Out, uint256 amount1Out) = direction\\n ? (uint256(0), tokensBought) : (tokensBought, uint256(0));\\n IUniswapV2Pair(pool).swap(\\n amount0Out,\\n amount1Out,\\n i + 1 == pairs\\n ? (tokensBoughtEth ? address(this) : msg.sender)\\n : address(pools[i + 1]),\\n ""\\n );\\n }\\n\\n if (tokensBoughtEth) {\\n IWETH(weth).withdraw(tokensBought);\\n TransferHelper.safeTransferETH(msg.sender, tokensBought);\\n }\\n\\n require(tokensBought >= amountOutMin, "UniswapV2Router: INSUFFICIENT_OUTPUT_AMOUNT");\\n }\\n```\\n\\nAs we can clearly see, the code first receives ETH and wraps it to WETH, then at the end unwraps the WETH to ETH and sends the ETH back to complete the trade.\\n```\\nif (tokensBoughtEth) {\\n IWETH(weth).withdraw(tokensBought);\\n TransferHelper.safeTransferETH(msg.sender, tokensBought);\\n}\\n```\\n\\nIn DODORouterProxy.sol#externalSwap, however, we are using the WETH balance before and after to check the received amount,\\nbut if we call swapOnUniswapV2Fork on the Paraswap router, the balance change for WETH would be 0\\nbecause, as we see above, the method on the Paraswap side wraps ETH to WETH but in the end unwraps the WETH and sends ETH back.\\nThere is also no logic to wrap the ETH to WETH before the trade, making ETH-related orders untradeable. 
| We recommend the project change from\\n```\\n // swap\\n uint256 toTokenOriginBalance;\\n if(toToken != _ETH_ADDRESS_) {\\n toTokenOriginBalance = IERC20(toToken).universalBalanceOf(address(this));\\n } else {\\n toTokenOriginBalance = IERC20(_WETH_).universalBalanceOf(address(this));\\n }\\n```\\n\\nto\\n```\\n // swap\\n uint256 toTokenOriginBalance;\\n if(toToken != _ETH_ADDRESS_) {\\n toTokenOriginBalance = IERC20(toToken).universalBalanceOf(address(this));\\n } else {\\n toTokenOriginBalance = IERC20(_ETH_ADDRESS).universalBalanceOf(address(this));\\n }\\n```\\n\\nIf we want to use WETH to do the balance check, we can help the user wrap the ETH to WETH by calling the following before doing the balance check.\\n```\\nIWETH(_WETH_).deposit(receiveAmount);\\n```\\n\\nIf we want to use WETH as the reference to trade, we also need to approve the external contract to spend our WETH.\\nWe can add\\n```\\nif(fromToken == _ETH_ADDRESS) {\\n IERC20(_WETH_).universalApproveMax(approveTarget, fromTokenAmount);\\n}\\n```\\n\\nWe also need to verify the fromTokenAmount for\\n```\\n(bool success, bytes memory result) = swapTarget.call{\\n value: fromToken == _ETH_ADDRESS_ ? fromTokenAmount : 0\\n}(callDataConcat);\\n```\\n\\nWe can add the check:\\n```\\nrequire(msg.value == fromTokenAmount, "invalid ETH amount");\\n```\\n | A lot of methods that do not use WETH to settle the trade will not be callable. | ```\\n function externalSwap(\\n address fromToken,\\n address toToken,\\n address approveTarget,\\n address swapTarget,\\n uint256 fromTokenAmount,\\n uint256 minReturnAmount,\\n bytes memory feeData,\\n bytes memory callDataConcat,\\n uint256 deadLine\\n ) external payable judgeExpired(deadLine) returns (uint256 receiveAmount) { \\n require(isWhiteListedContract[swapTarget], "DODORouteProxy: Not Whitelist Contract"); \\n require(isApproveWhiteListedContract[approveTarget], "DODORouteProxy: Not Whitelist Appprove Contract"); \\n\\n // transfer in fromToken\\n if (fromToken != _ETH_ADDRESS_) {\\n // approve if needed\\n if (approveTarget != address(0)) {\\n IERC20(fromToken).universalApproveMax(approveTarget, fromTokenAmount);\\n }\\n\\n IDODOApproveProxy(_DODO_APPROVE_PROXY_).claimTokens(\\n fromToken,\\n msg.sender,\\n address(this),\\n fromTokenAmount\\n );\\n }\\n\\n // swap\\n uint256 toTokenOriginBalance;\\n if(toToken != _ETH_ADDRESS_) {\\n toTokenOriginBalance = IERC20(toToken).universalBalanceOf(address(this));\\n } else {\\n toTokenOriginBalance = IERC20(_WETH_).universalBalanceOf(address(this));\\n }\\n```\\n
AutoRoller#eject can be used to steal all the yield from vault's YTs | high | AutoRoller#eject collects all the current yield of the YTs, combines the users share of the PTs and YTs then sends the user the entire target balance of the contract. The problem is that combine claims the yield for ALL YTs, which sends the AutoRoller target assets. Since it sends the user the entire target balance of the contract it accidentally sends the user the yield from all the pool's YTs.\\n```\\nfunction eject(\\n uint256 shares,\\n address receiver,\\n address owner\\n) public returns (uint256 assets, uint256 excessBal, bool isExcessPTs) {\\n\\n // rest of code\\n\\n //@audit call of interest\\n (excessBal, isExcessPTs) = _exitAndCombine(shares);\\n\\n _burn(owner, shares); // Burn after percent ownership is determined in _exitAndCombine.\\n\\n if (isExcessPTs) {\\n pt.transfer(receiver, excessBal);\\n } else {\\n yt.transfer(receiver, excessBal);\\n }\\n\\n //@audit entire asset (adapter.target) balance transferred to caller, which includes collected YT yield and combined\\n asset.transfer(receiver, assets = asset.balanceOf(address(this)));\\n\\n emit Ejected(msg.sender, receiver, owner, assets, shares,\\n isExcessPTs ? excessBal : 0,\\n isExcessPTs ? 0 : excessBal\\n );\\n}\\n\\nfunction _exitAndCombine(uint256 shares) internal returns (uint256, bool) {\\n uint256 supply = totalSupply; // Save extra SLOAD.\\n\\n uint256 lpBal = shares.mulDivDown(space.balanceOf(address(this)), supply);\\n uint256 totalPTBal = pt.balanceOf(address(this));\\n uint256 ptShare = shares.mulDivDown(totalPTBal, supply);\\n\\n // rest of code\\n\\n uint256 ytBal = shares.mulDivDown(yt.balanceOf(address(this)), supply);\\n ptShare += pt.balanceOf(address(this)) - totalPTBal;\\n\\n unchecked {\\n // Safety: an inequality check is done before subtraction.\\n if (ptShare > ytBal) {\\n\\n //@audit call of interest\\n divider.combine(address(adapter), maturity, ytBal);\\n return (ptShare - ytBal, true);\\n } else { // Set excess PTs to false if the balances are exactly equal.\\n divider.combine(address(adapter), maturity, ptShare);\\n return (ytBal - ptShare, false);\\n }\\n }\\n}\\n```\\n\\nEject allows the user to leave the liquidity pool by withdrawing their liquidity from the Balancer pool and combining the PTs and YTs via divider.combine.\\n```\\nfunction combine(\\n address adapter,\\n uint256 maturity,\\n uint256 uBal\\n) external nonReentrant whenNotPaused returns (uint256 tBal) {\\n if (!adapterMeta[adapter].enabled) revert Errors.InvalidAdapter();\\n if (!_exists(adapter, maturity)) revert Errors.SeriesDoesNotExist();\\n\\n uint256 level = adapterMeta[adapter].level;\\n if (level.combineRestricted() && msg.sender != adapter) revert Errors.CombineRestricted();\\n\\n // Burn the PT\\n Token(series[adapter][maturity].pt).burn(msg.sender, uBal);\\n\\n //@audit call of interest\\n uint256 collected = _collect(msg.sender, adapter, maturity, uBal, uBal, address(0));\\n\\n // rest of code\\n\\n // Convert from units of Underlying to units of Target\\n tBal = uBal.fdiv(cscale);\\n ERC20(Adapter(adapter).target()).safeTransferFrom(adapter, msg.sender, tBal);\\n\\n // Notify only when Series is not settled as when it is, the _collect() call above would trigger a _redeemYT which will call notify\\n if (!settled) Adapter(adapter).notify(msg.sender, tBal, false);\\n unchecked {\\n // Safety: bounded by the Target's total token supply\\n tBal += collected;\\n }\\n emit Combined(adapter, maturity, tBal, 
msg.sender);\\n}\\n```\\n\\n```\\nfunction _collect(\\n address usr,\\n address adapter,\\n uint256 maturity,\\n uint256 uBal,\\n uint256 uBalTransfer,\\n address to\\n) internal returns (uint256 collected) {\\n if (!_exists(adapter, maturity)) revert Errors.SeriesDoesNotExist();\\n\\n if (!adapterMeta[adapter].enabled && !_settled(adapter, maturity)) revert Errors.InvalidAdapter();\\n\\n Series memory _series = series[adapter][maturity];\\n uint256 lscale = lscales[adapter][maturity][usr];\\n\\n // rest of code\\n\\n uint256 tBalNow = uBal.fdivUp(_series.maxscale); // preventive round-up towards the protocol\\n uint256 tBalPrev = uBal.fdiv(lscale);\\n unchecked {\\n collected = tBalPrev > tBalNow ? tBalPrev - tBalNow : 0;\\n }\\n\\n //@audit adapter.target is transferred to AutoRoller\\n ERC20(Adapter(adapter).target()).safeTransferFrom(adapter, usr, collected);\\n Adapter(adapter).notify(usr, collected, false); // Distribute reward tokens\\n\\n // rest of code\\n}\\n```\\n\\nInside divider#combine the collected yield from the YTs are transferred to the AutoRoller. The AutoRoller balance will now contain both the collected yield of the YTs and the target yielded by combining. The end of eject transfers this entire balance to the caller, effectively stealing the yield of the entire AutoRoller. | Combine returns the amount of target yielded by combining the PT and YT. This balance is the amount of assets that should be transferred to the user. | User funds given to the wrong person | ```\\nfunction eject(\\n uint256 shares,\\n address receiver,\\n address owner\\n) public returns (uint256 assets, uint256 excessBal, bool isExcessPTs) {\\n\\n // rest of code\\n\\n //@audit call of interest\\n (excessBal, isExcessPTs) = _exitAndCombine(shares);\\n\\n _burn(owner, shares); // Burn after percent ownership is determined in _exitAndCombine.\\n\\n if (isExcessPTs) {\\n pt.transfer(receiver, excessBal);\\n } else {\\n yt.transfer(receiver, excessBal);\\n }\\n\\n //@audit entire asset (adapter.target) balance transferred to caller, which includes collected YT yield and combined\\n asset.transfer(receiver, assets = asset.balanceOf(address(this)));\\n\\n emit Ejected(msg.sender, receiver, owner, assets, shares,\\n isExcessPTs ? excessBal : 0,\\n isExcessPTs ? 0 : excessBal\\n );\\n}\\n\\nfunction _exitAndCombine(uint256 shares) internal returns (uint256, bool) {\\n uint256 supply = totalSupply; // Save extra SLOAD.\\n\\n uint256 lpBal = shares.mulDivDown(space.balanceOf(address(this)), supply);\\n uint256 totalPTBal = pt.balanceOf(address(this));\\n uint256 ptShare = shares.mulDivDown(totalPTBal, supply);\\n\\n // rest of code\\n\\n uint256 ytBal = shares.mulDivDown(yt.balanceOf(address(this)), supply);\\n ptShare += pt.balanceOf(address(this)) - totalPTBal;\\n\\n unchecked {\\n // Safety: an inequality check is done before subtraction.\\n if (ptShare > ytBal) {\\n\\n //@audit call of interest\\n divider.combine(address(adapter), maturity, ytBal);\\n return (ptShare - ytBal, true);\\n } else { // Set excess PTs to false if the balances are exactly equal.\\n divider.combine(address(adapter), maturity, ptShare);\\n return (ytBal - ptShare, false);\\n }\\n }\\n}\\n```\\n |
Adversary can brick AutoRoller by creating another AutoRoller on the same adapter | high | onSponsorWindowOpened attempts to make a new series at the desired maturity. Each adapter can only have one of each maturity. If the maturity requested already exists then onSponsorWindowOpened will revert, making it impossible to roll the AutoRoller. An adversary can take advantage of this to brick an AutoRoller by creating a second AutoRoller on the same adapter that will create a target maturity before the first AutoRoller. Since the maturity now exists, the first AutoRoller will always revert when trying to Roll.\\n```\\nuint256 _maturity = utils.getFutureMaturity(targetDuration);\\n\\nfunction getFutureMaturity(uint256 monthsForward) public view returns (uint256) {\\n (uint256 year, uint256 month, ) = DateTime.timestampToDate(DateTime.addMonths(block.timestamp, monthsForward));\\n return DateTime.timestampFromDateTime(year, month, 1 /* top of the month */, 0, 0, 0);\\n}\\n```\\n\\nInside AutoRoller#onSponsorWindowOpened the maturity is calculated using RollerUtils#getFutureMaturity. This returns the timestamp the requested months ahead, truncated down to the first of the month. It passes this calculated maturity as the maturity to sponsor a new series.\\n```\\n(ERC20 _pt, YTLike _yt) = periphery.sponsorSeries(address(adapter), _maturity, true);\\n```\\n\\n```\\nfunction sponsorSeries(\\n address adapter,\\n uint256 maturity,\\n bool withPool\\n) external returns (address pt, address yt) {\\n (, address stake, uint256 stakeSize) = Adapter(adapter).getStakeAndTarget();\\n\\n // Transfer stakeSize from sponsor into this contract\\n ERC20(stake).safeTransferFrom(msg.sender, address(this), stakeSize);\\n\\n // Approve divider to withdraw stake assets\\n ERC20(stake).approve(address(divider), stakeSize);\\n\\n (pt, yt) = divider.initSeries(adapter, maturity, msg.sender);\\n\\n // Space pool is always created for verified adapters whilst is optional for unverified ones.\\n // Automatically queueing series is only for verified adapters\\n if (verified[adapter]) {\\n poolManager.queueSeries(adapter, maturity, spaceFactory.create(adapter, maturity));\\n } else {\\n if (withPool) {\\n spaceFactory.create(adapter, maturity);\\n }\\n }\\n emit SeriesSponsored(adapter, maturity, msg.sender);\\n}\\n```\\n\\nperiphery#sponsorSeries is called with true indicating to create a space pool for the newly created series.\\n```\\nfunction create(address adapter, uint256 maturity) external returns (address pool) {\\n address pt = divider.pt(adapter, maturity);\\n _require(pt != address(0), Errors.INVALID_SERIES);\\n _require(pools[adapter][maturity] == address(0), Errors.POOL_ALREADY_EXISTS);\\n\\n pool = address(new Space(\\n vault,\\n adapter,\\n maturity,\\n pt,\\n ts,\\n g1,\\n g2,\\n oracleEnabled\\n ));\\n\\n pools[adapter][maturity] = pool;\\n}\\n```\\n\\nWe run into an issue inside SpaceFactory#create because it only allows a single pool per adapter/maturity. If a pool already exist then it will revert.\\nAn adversary can abuse this revert to brick an existing AutoRoller. Assume AutoRoller A has a duration of 3 months. Its current maturity is December 1st 2022, when rolled it will attempt to create a series at March 1st 2023. An adversary could abuse this and create AutoRoller B with a maturity of 4 months. When they roll for the first time it will create a series with maturity at March 1st 2023. 
When AutoRoller A attempts to roll it will revert since a series already exists at March 1st 2023.\\nThis conflict can happen accidentally if there is a monthly AutoRoller and a quarterly AutoRoller. It also hinders the viability of using an AutoRoller for an adapter that is popular because the series will likely have been created by the time the autoroller tries to roll into it. | Requiring that the AutoRoller has to create the series seems overly restrictive and leads to a large number of issues. Attempting to join an a series that is already initialized could also lead to pool manipulation rates. It seems like a large refactoring is needed for the rolling section of the AutoRoller | AutoRollers will frequently be bricked | ```\\nuint256 _maturity = utils.getFutureMaturity(targetDuration);\\n\\nfunction getFutureMaturity(uint256 monthsForward) public view returns (uint256) {\\n (uint256 year, uint256 month, ) = DateTime.timestampToDate(DateTime.addMonths(block.timestamp, monthsForward));\\n return DateTime.timestampFromDateTime(year, month, 1 /* top of the month */, 0, 0, 0);\\n}\\n```\\n |
Hardcoded divider address in RollerUtils is incorrect and will brick autoroller | medium | RollerUtils uses a hard-coded constant for the Divider. This address is incorrect and will cause a revert when trying to call AutoRoller#cooldown. If the adapter is combineRestricted then LPs could potentially be unable to withdraw or eject.\\n```\\naddress internal constant DIVIDER = 0x09B10E45A912BcD4E80a8A3119f0cfCcad1e1f12;\\n```\\n\\nRollerUtils uses a hardcoded constant DIVIDER to store the Divider address. There are two issues with this. The most pertinent issue is that the current address used is not the correct mainnet address. The second is that if the divider is upgraded, changing the address of the RollerUtils may be forgotten.\\n```\\n (, uint48 prevIssuance, , , , , uint256 iscale, uint256 mscale, ) = DividerLike(DIVIDER).series(adapter, prevMaturity);\\n```\\n\\nWith an incorrect address the divider#series call will revert causing RollerUtils#getNewTargetedRate to revert, which is called in AutoRoller#cooldown. The result is that the AutoRoller cycle can never be completed. LP will be forced to either withdraw or eject to remove their liquidity. Withdraw only works to a certain point because the AutoRoller tries to keep the target ratio. After which the eject would be the only way for LPs to withdraw. During eject the AutoRoller attempts to combine the PT and YT. If the adapter is also combineRestricted then there is no longer any way for the LPs to withdraw, causing loss of their funds. | RollerUtils DIVIDER should be set by constructor. Additionally RollerUtils should be deployed by the factory constructor to make sure they always have the same immutable divider reference. | Incorrect hard-coded divider address will brick autorollers for all adapters and will cause loss of funds for combineRestricted adapters | ```\\naddress internal constant DIVIDER = 0x09B10E45A912BcD4E80a8A3119f0cfCcad1e1f12;\\n```\\n |
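A minimal sketch of the constructor-based wiring suggested above; the exact constructor and contract shapes are assumptions, not the audited code. The idea is that the factory deploys RollerUtils and passes along its own divider reference, so the two contracts can never silently point at different addresses.
```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative only: the divider address is injected rather than hard-coded.
contract RollerUtilsSketch {
    address public immutable divider;

    constructor(address _divider) {
        require(_divider != address(0), "zero divider");
        divider = _divider;
    }
}

contract RollerFactorySketch {
    address public immutable divider;
    RollerUtilsSketch public immutable utils;

    constructor(address _divider) {
        divider = _divider;
        utils = new RollerUtilsSketch(_divider); // both contracts share the same reference
    }
}
```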
Adversary can brick AutoRoller by creating another AutoRoller on the same adapter | high | onSponsorWindowOpened attempts to make a new series at the desired maturity. Each adapter can only have one of each maturity. If the maturity requested already exists then onSponsorWindowOpened will revert, making it impossible to roll the AutoRoller. An adversary can take advantage of this to brick an AutoRoller by creating a second AutoRoller on the same adapter that will create a target maturity before the first AutoRoller. Since the maturity now exists, the first AutoRoller will always revert when trying to Roll.\\n```\\nuint256 _maturity = utils.getFutureMaturity(targetDuration);\\n\\nfunction getFutureMaturity(uint256 monthsForward) public view returns (uint256) {\\n (uint256 year, uint256 month, ) = DateTime.timestampToDate(DateTime.addMonths(block.timestamp, monthsForward));\\n return DateTime.timestampFromDateTime(year, month, 1 /* top of the month */, 0, 0, 0);\\n}\\n```\\n\\nInside AutoRoller#onSponsorWindowOpened the maturity is calculated using RollerUtils#getFutureMaturity. This returns the timestamp the requested months ahead, truncated down to the first of the month. It passes this calculated maturity as the maturity to sponsor a new series.\\n```\\n(ERC20 _pt, YTLike _yt) = periphery.sponsorSeries(address(adapter), _maturity, true);\\n```\\n\\n```\\nfunction sponsorSeries(\\n address adapter,\\n uint256 maturity,\\n bool withPool\\n) external returns (address pt, address yt) {\\n (, address stake, uint256 stakeSize) = Adapter(adapter).getStakeAndTarget();\\n\\n // Transfer stakeSize from sponsor into this contract\\n ERC20(stake).safeTransferFrom(msg.sender, address(this), stakeSize);\\n\\n // Approve divider to withdraw stake assets\\n ERC20(stake).approve(address(divider), stakeSize);\\n\\n (pt, yt) = divider.initSeries(adapter, maturity, msg.sender);\\n\\n // Space pool is always created for verified adapters whilst is optional for unverified ones.\\n // Automatically queueing series is only for verified adapters\\n if (verified[adapter]) {\\n poolManager.queueSeries(adapter, maturity, spaceFactory.create(adapter, maturity));\\n } else {\\n if (withPool) {\\n spaceFactory.create(adapter, maturity);\\n }\\n }\\n emit SeriesSponsored(adapter, maturity, msg.sender);\\n}\\n```\\n\\nperiphery#sponsorSeries is called with true indicating to create a space pool for the newly created series.\\n```\\nfunction create(address adapter, uint256 maturity) external returns (address pool) {\\n address pt = divider.pt(adapter, maturity);\\n _require(pt != address(0), Errors.INVALID_SERIES);\\n _require(pools[adapter][maturity] == address(0), Errors.POOL_ALREADY_EXISTS);\\n\\n pool = address(new Space(\\n vault,\\n adapter,\\n maturity,\\n pt,\\n ts,\\n g1,\\n g2,\\n oracleEnabled\\n ));\\n\\n pools[adapter][maturity] = pool;\\n}\\n```\\n\\nWe run into an issue inside SpaceFactory#create because it only allows a single pool per adapter/maturity. If a pool already exist then it will revert.\\nAn adversary can abuse this revert to brick an existing AutoRoller. Assume AutoRoller A has a duration of 3 months. Its current maturity is December 1st 2022, when rolled it will attempt to create a series at March 1st 2023. An adversary could abuse this and create AutoRoller B with a maturity of 4 months. When they roll for the first time it will create a series with maturity at March 1st 2023. 
When AutoRoller A attempts to roll, it will revert since a series already exists at March 1st 2023.\\nThis conflict can happen accidentally if there is a monthly AutoRoller and a quarterly AutoRoller. It also hinders the viability of using an AutoRoller for a popular adapter, because the series will likely have been created by the time the AutoRoller tries to roll into it. | Requiring that the AutoRoller itself creates the series seems overly restrictive and leads to a large number of issues. Attempting to join a series that is already initialized could also lead to pool rate manipulation. A large refactoring of the rolling section of the AutoRoller seems to be needed. | AutoRollers will frequently be bricked | ```\\nuint256 _maturity = utils.getFutureMaturity(targetDuration);\\n\\nfunction getFutureMaturity(uint256 monthsForward) public view returns (uint256) {\\n (uint256 year, uint256 month, ) = DateTime.timestampToDate(DateTime.addMonths(block.timestamp, monthsForward));\\n return DateTime.timestampFromDateTime(year, month, 1 /* top of the month */, 0, 0, 0);\\n}\\n```\\n
Public vault: Initial depositor can manipulate the price per share value, forcing future depositors to deposit a huge value into the vault. | high | Most share-based vault implementations face this issue. The vault is based on ERC4626, where the shares are calculated from the deposited value. By depositing a large amount as the initial deposit, the first depositor can influence the value that future depositors receive and take advantage over them.\\nThis type of issue has already been reported and acknowledged elsewhere; it shows how the price per share can be manipulated to a very large value.\\nERC4626 implementation:\\n```\\nfunction mint(uint256 shares, address receiver) public virtual returns (uint256 assets) {\\n assets = previewMint(shares); // No need to check for rounding error, previewMint rounds up.\\n\\n // Need to transfer before minting or ERC777s could reenter.\\n asset.safeTransferFrom(msg.sender, address(this), assets);\\n\\n _mint(receiver, shares);\\n\\n emit Deposit(msg.sender, receiver, assets, shares);\\n\\n afterDeposit(assets, shares);\\n}\\n\\n function previewMint(uint256 shares) public view virtual returns (uint256) {\\n uint256 supply = totalSupply; // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? shares : shares.mulDivUp(totalAssets(), supply);\\n}\\n```\\n | Consider requiring a minimal amount of share tokens to be minted for the first minter, and sending a portion of the initial mint as a reserve to the DAO or burning it, so that the price per share is more resistant to manipulation. | Future depositors are forced to deposit a huge value of assets, which is not practical for most users and could directly drive users away from the system. | ```\\n // Need to transfer before minting or ERC777s could reenter.\\n asset.safeTransferFrom(msg.sender, address(this), assets);\\n\\n _mint(receiver, shares);\\n\\n emit Deposit(msg.sender, receiver, assets, shares);\\n\\n afterDeposit(assets, shares);\\n}\\n\\n function previewMint(uint256 shares) public view virtual returns (uint256) {\\n uint256 supply = totalSupply; // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? shares : shares.mulDivUp(totalAssets(), supply);\\n}\\n```\\n
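One common mitigation, sketched below on top of the Solmate-style vault quoted above; the `MIN_SHARES` constant and the dead-share address are illustrative assumptions, not part of the audited code.

```
// Sketch: lock a minimum amount of shares on the first deposit so the share price cannot be inflated cheaply.
uint256 internal constant MIN_SHARES = 1e3; // illustrative dead-share amount

function deposit(uint256 assets, address receiver) public virtual override returns (uint256 shares) {
    shares = previewDeposit(assets);
    require(shares != 0, "ZERO_SHARES");

    asset.safeTransferFrom(msg.sender, address(this), assets);

    if (totalSupply == 0) {
        require(shares > MIN_SHARES, "FIRST_DEPOSIT_TOO_SMALL");
        _mint(address(0xdead), MIN_SHARES); // permanently locked shares
        shares -= MIN_SHARES;
    }

    _mint(receiver, shares);
    emit Deposit(msg.sender, receiver, assets, shares);
    afterDeposit(assets, shares);
}
```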
Math rounding in AutoRoller.sol is not ERC4626-compliant: previewWithdraw should round up. | medium | Math rounding in AutoRoller.sol is not ERC4626-compliant: previewWithdraw should round up.\\nAs EIP-4626 states, Vault implementers should be aware of the need for specific, opposing rounding directions across the different mutable and view methods, as it is considered most secure to favor the Vault itself during calculations over its users:\\nIf (1) it's calculating how many shares to issue to a user for a certain amount of the underlying tokens they provide or (2) it's determining the amount of the underlying tokens to transfer to them for returning a certain amount of shares, it should round down. If (1) it's calculating the amount of shares a user has to supply to receive a given amount of the underlying tokens or (2) it's calculating the amount of underlying tokens a user has to provide to receive a certain amount of shares, it should round up.\\nTherefore previewWithdraw in AutoRoller.sol should round up.\\nThe original implementation of previewWithdraw in Solmate's ERC4626 is:\\n```\\n function previewWithdraw(uint256 assets) public view virtual returns (uint256) {\\n uint256 supply = totalSupply; // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? assets : assets.mulDivUp(supply, totalAssets());\\n }\\n```\\n\\nIt rounds up; however, the implementation in AutoRoller.sol#previewWithdraw does not round up.\\n```\\nfor (uint256 i = 0; i < 20;) { // 20 chosen as a safe bound for convergence from practical trials.\\n if (guess > supply) {\\n guess = supply;\\n }\\n\\n int256 answer = previewRedeem(guess.safeCastToUint()).safeCastToInt() - assets.safeCastToInt();\\n\\n if (answer >= 0 && answer <= assets.mulWadDown(0.001e18).safeCastToInt() || (prevAnswer == answer)) { // Err on the side of overestimating shares needed. Could reduce precision for gas efficiency.\\n break;\\n }\\n\\n if (guess == supply && answer < 0) revert InsufficientLiquidity();\\n\\n int256 nextGuess = guess - (answer * (guess - prevGuess) / (answer - prevAnswer));\\n prevGuess = guess;\\n prevAnswer = answer;\\n guess = nextGuess;\\n\\n unchecked { ++i; }\\n}\\n\\nreturn guess.safeCastToUint() + maxError; // Buffer for pow discrepancies.\\n```\\n\\nNote the line:\\n```\\n int256 answer = previewRedeem(guess.safeCastToUint()).safeCastToInt() - assets.safeCastToInt();\\n```\\n\\npreviewRedeem rounds down.\\nLater, guess is updated and returned:\\n```\\n int256 nextGuess = guess - (answer * (guess - prevGuess) / (answer - prevAnswer));\\n prevGuess = guess;\\n prevAnswer = answer;\\n guess = nextGuess;\\n```\\n\\nand\\n```\\n return guess.safeCastToUint() + maxError; // Buffer for pow discrepancies.\\n```\\n\\nWhen calculating nextGuess, the code does not round up.\\n```\\nint256 nextGuess = guess - (answer * (guess - prevGuess) / (answer - prevAnswer));\\n```\\n | Round up in previewWithdraw using mulDivUp and divWadUp | Other protocols that integrate with Sense Finance's AutoRoller.sol might wrongly assume that the functions handle rounding as per the ERC4626 expectation. This could cause integration problems in the future and lead to a wide range of issues for both parties. | ```\\n function previewWithdraw(uint256 assets) public view virtual returns (uint256) {\\n uint256 supply = totalSupply; // Saves an extra SLOAD if totalSupply is non-zero.\\n\\n return supply == 0 ? assets : assets.mulDivUp(supply, totalAssets());\\n }\\n```\\n
Funding Rate calculation is not correct | medium | According to the docs, the Funding Rate is intended to correspond to the gap between long and short positions that the Float Pool is required to make up. However, as its implemented, the `totalFunding` is calculated only on the size of the overbalanced position, leading to some unexpected situations.\\nAccording to the comments, `totalFunding` is meant to be calculated as follows:\\ntotalFunding is calculated on the notional of between long and short liquidity and 2x long and short liquidity.\\nThis makes sense. The purpose of the funding rate is to compensate the Float Pool for the liquidity provided to balance the market.\\nHowever, the implementation of this function does not accomplish this. Instead, `totalFunding` is based only on the size of the overbalancedValue:\\n```\\nuint256 totalFunding = (2 * overbalancedValue * fundingRateMultiplier * oracleManager.EPOCH_LENGTH()) / (365.25 days * 10000);\\n```\\n\\nThis can be summarized as `2 * overbalancedValue * funding rate percentage * epochs / yr`.\\nThis formula can cause problems, because the size of the overbalanced value doesn't necessarily correspond to the balancing required for the Float Pool.\\nFor these examples, let's set:\\n`fundingRateMultiplier = 100` (1%)\\n`EPOCH_LENGTH() = 3.6525 days` (1% of a year)\\nSITUATION A:\\nOverbalanced: LONG\\nLong Effective Liquidity: 1_000_000 ether\\nShort Effective Liquidity: 999_999 ether\\n`totalFunding = 2 * 1_000_000 ether * 1% * 1% = 200 ether`\\nAmount of balancing supplied by Float = 1mm - 999,999 = 1 ether\\nSITUATION B:\\nOverbalanced: LONG\\nLong Effective Liquidity: 1_000 ether\\nShort Effective Liquidity: 100 ether\\n`totalFunding = 2 * 1_000 ether * 1% * 1% = 0.2 ether`\\nAmount of balancing supplied by Float = 1000 - 100 = 900 ether\\nWe can see that in Situation B, Float supplied 900X more liquidity to the system, and earned 1000X less fees. | Adjust the `totalFunding` formula to represent the stated outcome. A simple example of how that might be accomplished is below, but I'm sure there are better implementations:\\n```\\nuint256 totalFunding = ((overbalancedValue - underbalancedValue) * fundingRateMultiplier * oracle.EPOCH_LENGTH()) / (365.25 days * 10_000);\\n```\\n | Funding Rates will not accomplish the stated objective, and will serve to incentivize pools that rely heavily on Float for balancing, while disincentivizing large, balanced markets. | ```\\nuint256 totalFunding = (2 * overbalancedValue * fundingRateMultiplier * oracleManager.EPOCH_LENGTH()) / (365.25 days * 10000);\\n```\\n |
Hardcoded divider address in RollerUtils is incorrect and will brick autoroller | medium | RollerUtils uses a hard-coded constant for the Divider. This address is incorrect and will cause a revert when trying to call AutoRoller#cooldown. If the adapter is combineRestricted then LPs could potentially be unable to withdraw or eject.\\n```\\naddress internal constant DIVIDER = 0x09B10E45A912BcD4E80a8A3119f0cfCcad1e1f12;\\n```\\n\\nRollerUtils uses a hardcoded constant DIVIDER to store the Divider address. There are two issues with this. The most pertinent issue is that the current address used is not the correct mainnet address. The second is that if the divider is upgraded, changing the address of the RollerUtils may be forgotten.\\n```\\n (, uint48 prevIssuance, , , , , uint256 iscale, uint256 mscale, ) = DividerLike(DIVIDER).series(adapter, prevMaturity);\\n```\\n\\nWith an incorrect address the divider#series call will revert causing RollerUtils#getNewTargetedRate to revert, which is called in AutoRoller#cooldown. The result is that the AutoRoller cycle can never be completed. LP will be forced to either withdraw or eject to remove their liquidity. Withdraw only works to a certain point because the AutoRoller tries to keep the target ratio. After which the eject would be the only way for LPs to withdraw. During eject the AutoRoller attempts to combine the PT and YT. If the adapter is also combineRestricted then there is no longer any way for the LPs to withdraw, causing loss of their funds. | RollerUtils DIVIDER should be set by constructor. Additionally RollerUtils should be deployed by the factory constructor to make sure they always have the same immutable divider reference. | Incorrect hard-coded divider address will brick autorollers for all adapters and will cause loss of funds for combineRestricted adapters | ```\\naddress internal constant DIVIDER = 0x09B10E45A912BcD4E80a8A3119f0cfCcad1e1f12;\\n```\\n |
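A minimal sketch of the recommendation (constructor parameter names are illustrative): inject the Divider address at deployment and let the factory deploy RollerUtils, so both always point at the same Divider.

```
// Sketch: constructor-injected Divider reference instead of a hard-coded constant.
contract RollerUtils {
    address public immutable DIVIDER;

    constructor(address divider) {
        require(divider != address(0), "ZERO_DIVIDER");
        DIVIDER = divider;
    }
}

contract AutoRollerFactory {
    RollerUtils public immutable utils;

    constructor(address divider) {
        // Deploying RollerUtils here guarantees it shares the factory's Divider.
        utils = new RollerUtils(divider);
    }
}
```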
AutoRoller.sol#roll can revert if lastSettle is zero because the Solmate ERC4626 deposit reverts if previewDeposit returns 0 | medium | AutoRoller.sol#roll can revert if lastSettle is zero because the Solmate ERC4626 deposit reverts if previewDeposit returns 0.\\nLet us look into the implementation of roll():\\n```\\n /// @notice Roll into the next Series if there isn't an active series and the cooldown period has elapsed.\\n function roll() external {\\n if (maturity != MATURITY_NOT_SET) revert RollWindowNotOpen();\\n\\n if (lastSettle == 0) {\\n // If this is the first roll, lock some shares in by minting them for the zero address.\\n // This prevents the contract from reaching an empty state during future active periods.\\n deposit(firstDeposit, address(0));\\n } else if (lastSettle + cooldown > block.timestamp) {\\n revert RollWindowNotOpen();\\n }\\n\\n lastRoller = msg.sender;\\n adapter.openSponsorWindow();\\n }\\n```\\n\\nNote that if lastSettle is 0, a small amount of tokens is deposited and the shares are minted to address(0):\\n```\\ndeposit(firstDeposit, address(0));\\n```\\n\\nfirstDeposit is a fairly small amount:\\n```\\nfirstDeposit = (0.01e18 - 1) / scalingFactor + 1;\\n```\\n\\ndeposit comes from the Solmate ERC4626 implementation:\\n```\\nfunction deposit(uint256 assets, address receiver) public virtual returns (uint256 shares) {\\n // Check for rounding error since we round down in previewDeposit.\\n require((shares = previewDeposit(assets)) != 0, "ZERO_SHARES");\\n\\n // Need to transfer before minting or ERC777s could reenter.\\n asset.safeTransferFrom(msg.sender, address(this), assets);\\n\\n _mint(receiver, shares);\\n\\n emit Deposit(msg.sender, receiver, assets, shares);\\n\\n afterDeposit(assets, shares);\\n}\\n```\\n\\nNote the restriction:\\n```\\n// Check for rounding error since we round down in previewDeposit.\\nrequire((shares = previewDeposit(assets)) != 0, "ZERO_SHARES");\\n\\n// Need to transfer before minting or ERC777s could reenter.\\nasset.safeTransferFrom(msg.sender, address(this), assets);\\n```\\n\\nIf previewDeposit returns 0 shares, the transaction reverts. Can previewDeposit return 0 shares? It is very possible.\\n```\\nfunction previewDeposit(uint256 assets) public view override returns (uint256) {\\n if (maturity == MATURITY_NOT_SET) {\\n return super.previewDeposit(assets);\\n } else {\\n Space _space = space;\\n (uint256 ptReserves, uint256 targetReserves) = _getSpaceReserves();\\n\\n // Calculate how much Target we'll end up joining the pool with, and use that to preview minted LP shares.\\n uint256 previewedLPBal = (assets - _getTargetForIssuance(ptReserves, targetReserves, assets, adapter.scaleStored()))\\n .mulDivDown(_space.adjustedTotalSupply(), targetReserves);\\n\\n // Shares represent proportional ownership of LP shares the vault holds.\\n return previewedLPBal.mulDivDown(totalSupply, _space.balanceOf(address(this)));\\n }\\n}\\n```\\n\\nIf (previewedLPBal * totalSupply) / _space.balanceOf(address(this)) truncates to 0, the transaction reverts. _space.balanceOf can certainly be inflated if a malicious actor manually sends Space tokens to the contract, or previewedLPBal * totalSupply could simply be small enough that the division truncates to 0. | We recommend not depositing such a small amount, or adding a function that lets the admin control how many tokens are used for the first deposit. | Calling roll would revert and the new sponsored series could not be started properly.
| ```\\n /// @notice Roll into the next Series if there isn't an active series and the cooldown period has elapsed.\\n function roll() external {\\n if (maturity != MATURITY_NOT_SET) revert RollWindowNotOpen();\\n\\n if (lastSettle == 0) {\\n // If this is the first roll, lock some shares in by minting them for the zero address.\\n // This prevents the contract from reaching an empty state during future active periods.\\n deposit(firstDeposit, address(0));\\n } else if (lastSettle + cooldown > block.timestamp) {\\n revert RollWindowNotOpen();\\n }\\n\\n lastRoller = msg.sender;\\n adapter.openSponsorWindow();\\n }\\n```\\n |
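A hedged sketch of one of the suggested mitigations (the owner role, setter name, and floor value are assumptions for illustration): make `firstDeposit` adjustable so it always stays large enough for `previewDeposit` to return non-zero shares.

```
// Sketch: admin-adjustable first deposit with a floor at the current default.
uint256 public firstDeposit;

function setFirstDeposit(uint256 newFirstDeposit) external {
    require(msg.sender == owner, "UNAUTHORIZED"); // assumes an existing owner/admin role
    require(newFirstDeposit >= (0.01e18 - 1) / scalingFactor + 1, "TOO_SMALL");
    firstDeposit = newFirstDeposit;
}
```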
Lender#lend for Sense has mismatched decimals | high | The decimals of the Sense principal token don't match the decimals of the ERC5095 vault it mints shares to. This can be abused on the USDC market to mint a large number of shares to steal yield from all other users.\\n```\\n uint256 received;\\n {\\n // Get the starting balance of the principal token\\n uint256 starting = token.balanceOf(address(this));\\n\\n // Swap those tokens for the principal tokens\\n ISensePeriphery(x).swapUnderlyingForPTs(adapter, s, lent, r);\\n\\n // Calculate number of principal tokens received in the swap\\n received = token.balanceOf(address(this)) - starting;\\n\\n // Verify that we received the principal tokens\\n if (received < r) {\\n revert Exception(11, 0, 0, address(0), address(0));\\n }\\n }\\n\\n // Mint the Illuminate tokens based on the returned amount\\n IERC5095(principalToken(u, m)).authMint(msg.sender, received);\\n```\\n\\nSense principal tokens for DAI and USDC have 8 decimals to match the decimals of the underlying cTokens, cDAI and cUSDC. The decimals of the ERC5095 vault match those of the vault's underlying. This creates a disparity in decimals that isn't adjusted for in Lender#lend for Sense, which assumes that the vault and Sense principal tokens match in decimals. In the example of USDC, the ERC5095 will have 6 decimals but the Sense token will have 8 decimals. Each 1e6 USDC will result in ~1e8 Sense tokens being received. Since the contract mints based on the difference in the number of Sense tokens before and after the call, it will mint ~100x the number of vault shares it should. Since the final yield is distributed pro-rata to the number of shares, the user who minted with Sense will be entitled to much more yield than they should be, and everyone else will get substantially less. | Query the decimals of the Sense principal token and use them to adjust the received amount to match the decimals of the vault. | A user can mint a large number of shares to steal funds from other users | ```\\n uint256 received;\\n {\\n // Get the starting balance of the principal token\\n uint256 starting = token.balanceOf(address(this));\\n\\n // Swap those tokens for the principal tokens\\n ISensePeriphery(x).swapUnderlyingForPTs(adapter, s, lent, r);\\n\\n // Calculate number of principal tokens received in the swap\\n received = token.balanceOf(address(this)) - starting;\\n\\n // Verify that we received the principal tokens\\n if (received < r) {\\n revert Exception(11, 0, 0, address(0), address(0));\\n }\\n }\\n\\n // Mint the Illuminate tokens based on the returned amount\\n IERC5095(principalToken(u, m)).authMint(msg.sender, received);\\n```\\n
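A hedged sketch of the recommended adjustment (the IERC20Metadata interface and the scaling placement are assumptions for illustration): read the Sense PT's decimals and rescale the received amount to the vault's decimals before minting.

```
// Sketch: normalise the Sense PT amount to the ERC5095 vault's decimals before minting shares.
uint8 ptDecimals = IERC20Metadata(token).decimals(); // e.g. 8 for cToken-backed Sense PTs
uint8 vaultDecimals = IERC20Metadata(u).decimals();  // e.g. 6 for USDC

uint256 mintable = received;
if (ptDecimals > vaultDecimals) {
    mintable = received / 10 ** (ptDecimals - vaultDecimals);
} else if (ptDecimals < vaultDecimals) {
    mintable = received * 10 ** (vaultDecimals - ptDecimals);
}

IERC5095(principalToken(u, m)).authMint(msg.sender, mintable);
```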
Lend or mint after maturity | high | The protocol does not forbid lending or minting after maturity, leaving the possibility to profit at the expense of early users.\\nLet's take the mint function as an example:\\n```\\n function mint(\\n uint8 p,\\n address u,\\n uint256 m,\\n uint256 a\\n ) external unpaused(u, m, p) returns (bool) {\\n // Fetch the desired principal token\\n address principal = IMarketPlace(marketPlace).token(u, m, p);\\n\\n // Transfer the users principal tokens to the lender contract\\n Safe.transferFrom(IERC20(principal), msg.sender, address(this), a);\\n\\n // Mint the tokens received from the user\\n IERC5095(principalToken(u, m)).authMint(msg.sender, a);\\n\\n emit Mint(p, u, m, a);\\n\\n return true;\\n }\\n```\\n\\nIt is a simple function that accepts the principal token and mints the corresponding ERC5095 tokens in return. There are no restrictions on timing: the user can mint even after maturity. Malicious actors can take advantage of this to pump their bags at the expense of legitimate early users.\\nScenario:\\nLegitimate users lend and mint their ERC5095 tokens before maturity.\\nWhen the maturity kicks in, lender tokens are redeemed and holdings are updated.\\nLegitimate users try to redeem their ERC5095 for the underlying tokens. The formula is `(amount * holdings[u][m]) / token.totalSupply();`\\nA malicious actor sandwiches legitimate users and mints ERC5095, thus increasing the totalSupply and reducing other users' shares. They then redeem principals again and burn their own shares for increased rewards.\\nExample with concrete values:\\nuserA deposits `100` tokens, userB deposits `200` tokens. The total supply minted is `300` ERC5095 tokens.\\nAfter maturity the redemption happens, and now let's say `holdings[u][m]` is `330` (+30).\\nuserA tries to redeem the underlying. The expected amount is: `100` * `330` / `300` = 110. However, this action is front-run by userC (malicious), who mints another `500` tokens post-maturity. The total supply becomes `800`. The real value userA now receives is: `100` * `330` / `800` = 41.25.\\nAfter that, the malicious actor userC invokes the redemption again, and `holdings[u][m]` is now `330` - 41.25 + `550` = 838.75.\\nuserC redeems the underlying: `500` * 838.75 / 700 ~= 599.11 (expected was 550).\\nNow the remaining users will also slightly benefit, e.g. in this case userB redeems what's left: `200` * 239.64 / `200` = 239.64 (expected was 220). | Lend/mint should be forbidden post-maturity. | The amount legitimate users receive will be devalued, while a malicious actor can increase their ROI without meaningfully contributing to the protocol or locking their tokens. | ```\\n function mint(\\n uint8 p,\\n address u,\\n uint256 m,\\n uint256 a\\n ) external unpaused(u, m, p) returns (bool) {\\n // Fetch the desired principal token\\n address principal = IMarketPlace(marketPlace).token(u, m, p);\\n\\n // Transfer the users principal tokens to the lender contract\\n Safe.transferFrom(IERC20(principal), msg.sender, address(this), a);\\n\\n // Mint the tokens received from the user\\n IERC5095(principalToken(u, m)).authMint(msg.sender, a);\\n\\n emit Mint(p, u, m, a);\\n\\n return true;\\n }\\n```\\n
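A minimal sketch of the recommended guard (the exception code mirrors the one ERC5095 uses for a passed maturity elsewhere in this report, but its use here is an assumption): reject mint and lend once the maturity timestamp has passed.

```
// Sketch: block post-maturity minting so late entrants cannot dilute existing holders.
function mint(uint8 p, address u, uint256 m, uint256 a) external unpaused(u, m, p) returns (bool) {
    if (block.timestamp > m) {
        revert Exception(21, block.timestamp, m, address(0), address(0)); // code 21 is illustrative
    }
    // ... existing transferFrom + authMint logic from the report ...
    return true;
}
```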
Incorrect parameters | medium | Some functions and integrations receive the wrong parameters.\\nHere, this does not work:\\n```\\n } else if (p == uint8(Principals.Notional)) {\\n // Principal token must be approved for Notional's lend\\n ILender(lender).approve(address(0), address(0), address(0), a);\\n```\\n\\nbecause it basically translates to:\\n```\\n } else if (p == uint8(Principals.Notional)) {\\n if (a != address(0)) {\\n Safe.approve(IERC20(address(0)), a, type(uint256).max);\\n }\\n```\\n\\nIt tries to approve a non-existing token. It should approve the underlying token and Notional's token contract.\\nAnother issue is with Tempus here:\\n```\\n // Swap on the Tempus Router using the provided market and params\\n ITempus(controller).depositAndFix(x, lent, true, r, d);\\n\\n // Calculate the amount of Tempus principal tokens received after the deposit\\n uint256 received = IERC20(principal).balanceOf(address(this)) - start;\\n\\n // Verify that a minimum number of principal tokens were received\\n if (received < r) {\\n revert Exception(11, received, r, address(0), address(0));\\n }\\n```\\n\\nIt passes `r` as a slippage parameter and later checks that received >= `r`. However, in Tempus this parameter is not exactly the minimum amount to receive, it is the ratio which is calculated as follows:\\n```\\n /// @param minTYSRate Minimum exchange rate of TYS (denominated in TPS) to receive in exchange for TPS\\n function depositAndFix(\\n ITempusAMM tempusAMM,\\n uint256 tokenAmount,\\n bool isBackingToken,\\n uint256 minTYSRate,\\n uint256 deadline\\n ) external payable nonReentrant {\\n// rest of code\\n uint256 minReturn = swapAmount.mulfV(minTYSRate, targetPool.backingTokenONE());\\n```\\n | Review all the integrations and function invocations, and make sure the appropriate parameters are passed. | Inaccurate parameter values may lead to protocol misfunction down the road, e.g. insufficient approval or unpredicted slippage. | ```\\n } else if (p == uint8(Principals.Notional)) {\\n // Principal token must be approved for Notional's lend\\n ILender(lender).approve(address(0), address(0), address(0), a);\\n```\\n |
Sense PT redemptions do not allow for known loss scenarios | medium | Sense PT redemptions do not allow for known loss scenarios, which will lead to principal losses\\nThe Sense PT redemption code in the `Redeemer` expects any losses during redemption to be due to a malicious adapter, and requires that there be no losses. However, there are legitimate reasons for there to be losses which aren't accounted for, which will cause the PTs to be unredeemable. The Lido FAQ page lists two such reasons:\\n```\\n- Slashing risk\\n\\nETH 2.0 validators risk staking penalties, with up to 100% of staked funds at risk if validators fail. To minimise this risk, Lido stakes across multiple professional and reputable node operators with heterogeneous setups, with additional mitigation in the form of insurance that is paid from Lido fees.\\n\\n- stETH price risk\\n\\nUsers risk an exchange price of stETH which is lower than inherent value due to withdrawal restrictions on Lido, making arbitrage and risk-free market-making impossible. \\n\\nThe Lido DAO is driven to mitigate above risks and eliminate them entirely to the extent possible. Despite this, they may still exist and, as such, it is our duty to communicate them.\\n```\\n\\nIf Lido is slashed, or there are withdrawal restrictions, the Sense series sponsor will be forced to settle the series, regardless of the exchange rate (or miss out on their rewards). The Sense `Divider` contract anticipates and properly handles these losses, but the Illuminate code does not.\\nLido is just one example of a Sense token that exists in the Illuminate code base - there may be others added in the future which also require there to be allowances for losses. | Allow losses during redemption if Sense's `Periphery.verified()` returns `true` | Permanent freezing of funds\\nThere may be a malicious series sponsor that purposely triggers a loss, either by DOSing Lido validators, or by withdrawing enough to trigger withdrawal restrictions. In such a case, the exchange rate stored by Sense during the settlement will lead to losses, and users that hold Illumimate PTs (not just the users that minted Illuminate PTs with Sense PTs), will lose their principal, because Illuminate PT redemptions are an a share-of-underlying basis, not on the basis of the originally-provided token.\\nWhile the Illuminate project does have an emergency `withdraw()` function that would allow an admin to rescue the funds and manually distribute them, this would not be trustless and defeats the purpose of having a smart contract. | ```\\n- Slashing risk\\n\\nETH 2.0 validators risk staking penalties, with up to 100% of staked funds at risk if validators fail. To minimise this risk, Lido stakes across multiple professional and reputable node operators with heterogeneous setups, with additional mitigation in the form of insurance that is paid from Lido fees.\\n\\n- stETH price risk\\n\\nUsers risk an exchange price of stETH which is lower than inherent value due to withdrawal restrictions on Lido, making arbitrage and risk-free market-making impossible. \\n\\nThe Lido DAO is driven to mitigate above risks and eliminate them entirely to the extent possible. Despite this, they may still exist and, as such, it is our duty to communicate them.\\n```\\n |
Notional PT redemptions do not use flash-resistant prices | medium | Notional PT redemptions do not use the correct function for determining balances, which will lead to principal losses\\nEIP-4626 states the following about maxRedeem():\\n```\\nMUST return the maximum amount of shares that could be transferred from `owner` through `redeem` and not cause a revert, which MUST NOT be higher than the actual maximum that would be accepted (it should underestimate if necessary).\\n\\nMUST factor in both global and user-specific limits, like if redemption is entirely disabled (even temporarily) it MUST return 0.\\n```\\n\\nThe above means that the implementer is free to return less than the actual balance, and is in fact required to return zero if the token's backing store is paused, and Notional's can be paused. While neither of these conditions currently apply to the existing wfCashERC4626 implementation, there is nothing stopping Notional from implementing the MUST-return-zero-if-paused fix tomorrow, or from changing their implementation to one that requires `maxRedeem()` to return something other than the current balance. | Use `balanceOf()` rather than `maxRedeem()` in the call to `INotional.redeem()`, and make sure that Illuminate PTs can't be burned if `Lender` still has Notional PTs that it needs to redeem (based on its own accounting of what is remaining, not based on balance checks, so that it can't be griefed with dust). | Permanent freezing of funds\\nIf `maxRedeem()` were to return zero, or some other non-exact value, fewer Notional PTs would be redeemed than are available, and users that redeem()ed their shares, would receive fewer underlying (principal if they minted Illuminate PTs with Notional PTs, e.g. to be an LP in the pool) than they are owed. The Notional PTs that weren't redeemed would still be available for a subsequent call, but if a user already redeemed their Illuminate PTs, their loss will already be locked in, since their Illuminate PTs will have been burned. This would affect ALL Illuminate PT holders of a specific market, not just the ones that provided the Notional PTs, because Illuminate PT redemptions are an a share-of-underlying basis, not on the basis of the originally-provided token. Markets that are already live with Notional set cannot be protected via a redemption pause by the Illuminate admin, because redemption of Lender's external PTs for underlying does not use the `unpaused` modifier, and does have any access control. | ```\\nMUST return the maximum amount of shares that could be transferred from `owner` through `redeem` and not cause a revert, which MUST NOT be higher than the actual maximum that would be accepted (it should underestimate if necessary).\\n\\nMUST factor in both global and user-specific limits, like if redemption is entirely disabled (even temporarily) it MUST return 0.\\n```\\n |
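A hedged sketch of the recommendation (the receiver/owner arguments and surrounding call site are illustrative; wfCashERC4626 follows the EIP-4626 redeem(shares, receiver, owner) shape): size the redemption by the Lender's actual PT balance instead of maxRedeem().

```
// Sketch: redeem based on the held balance, since EIP-4626 allows maxRedeem() to under-report
// (and requires it to return 0 while redemptions are paused).
uint256 notionalPTs = IERC20(principal).balanceOf(address(lender));
IERC4626(principal).redeem(notionalPTs, address(this), address(lender));
```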
Marketplace.setPrincipal does not approve the needed allowance for the Element vault and APWine router | medium | `Marketplace.setPrincipal` does not approve the needed allowance for the `Element vault` and `APWine router`.\\n`Marketplace.setPrincipal` is used to set the principal token for a base token and maturity when it was not set yet. To set the PT you also provide the protocol that this token belongs to.\\nIn the case of the `APWine` protocol there is a special block of code to handle its allowance. But it is not enough.\\n```\\n } else if (p == uint8(Principals.Apwine)) {\\n address futureVault = IAPWineToken(a).futureVault();\\n address interestBearingToken = IAPWineFutureVault(futureVault)\\n .getIBTAddress();\\n IRedeemer(redeemer).approve(interestBearingToken);\\n } else if (p == uint8(Principals.Notional)) {\\n```\\n\\nIn `setPrincipal` we don't have the params needed for the `Lender` allowances, so they are not set and `Lender` will not be able to work with those tokens correctly. | Add 2 more params as in `createMarket` and call `ILender(lender).approve(u, e, a, address(0));` | Lender will not have the needed allowance and the protocol integration will fail. | ```\\n } else if (p == uint8(Principals.Apwine)) {\\n address futureVault = IAPWineToken(a).futureVault();\\n address interestBearingToken = IAPWineFutureVault(futureVault)\\n .getIBTAddress();\\n IRedeemer(redeemer).approve(interestBearingToken);\\n } else if (p == uint8(Principals.Notional)) {\\n```\\n
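A hedged sketch of the suggested change (parameter names mirror createMarket and the approve call is taken from the recommendation; the exact signatures are assumptions): pass the Element vault and APWine router into setPrincipal so the Lender allowances get set.

```
// Sketch: setPrincipal extended with the Element vault (e) and APWine router (a) addresses.
function setPrincipal(
    uint8 p,
    address u,
    uint256 m,
    address principal, // the principal token being registered (named `a` in the current code)
    address e,         // Element vault (assumption, mirrors createMarket)
    address a          // APWine router (assumption, mirrors createMarket)
) external onlyAdmin returns (bool) {
    // ... existing validation and markets[u][m][p] = principal ...
    if (p == uint8(Principals.Apwine)) {
        address futureVault = IAPWineToken(principal).futureVault();
        address interestBearingToken = IAPWineFutureVault(futureVault).getIBTAddress();
        IRedeemer(redeemer).approve(interestBearingToken);
    }
    ILender(lender).approve(u, e, a, address(0)); // allowance call suggested in the recommendation
    return true;
}
```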
ERC5095.mint function calculates slippage incorrectly | medium | The ERC5095.mint function calculates slippage incorrectly. This leads to a loss of funds for the user.\\nThe `ERC5095.mint` function takes the amount of shares that the user wants to receive and then buys that amount. It uses a hardcoded 1% slippage limit when trading base tokens for principal tokens, but it takes 1% of the calculated assets amount, not of the shares.\\n```\\n function mint(address r, uint256 s) external override returns (uint256) {\\n if (block.timestamp > maturity) {\\n revert Exception(\\n 21,\\n block.timestamp,\\n maturity,\\n address(0),\\n address(0)\\n );\\n }\\n uint128 assets = Cast.u128(previewMint(s));\\n Safe.transferFrom(\\n IERC20(underlying),\\n msg.sender,\\n address(this),\\n assets\\n );\\n // consider the hardcoded slippage limit, 4626 compliance requires no minimum param.\\n uint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n assets,\\n assets - (assets / 100)\\n );\\n _transfer(address(this), r, returned);\\n return returned;\\n }\\n```\\n\\nThis is how slippage is provided:\\n```\\nuint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n assets,\\n assets - (assets / 100)\\n );\\n```\\n\\nThe problem is that `assets` is the amount of base tokens that the user should pay for the shares they want to receive. Slippage should be calculated using the amount of shares the user expects to get.\\nExample: a user calls mint with amount 1000, meaning they want to get 1000 principal tokens. Converting to assets gives assets = 990, so the user should pay 990 base tokens to get 1000 principal tokens. Then `sellUnderlying` is called and the slippage provided is `990*0.99=980.1`. So if the price moves, it's possible that the user will receive 980.1 principal tokens instead of 1000, which is a 2% loss.\\nTo fix this, `s - (s / 100)` should be provided as the slippage parameter. | Use this:\\n```\\nuint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n assets,\\n s - (s / 100)\\n );\\n```\\n | Loss of user funds. | ```\\n function mint(address r, uint256 s) external override returns (uint256) {\\n if (block.timestamp > maturity) {\\n revert Exception(\\n 21,\\n block.timestamp,\\n maturity,\\n address(0),\\n address(0)\\n );\\n }\\n uint128 assets = Cast.u128(previewMint(s));\\n Safe.transferFrom(\\n IERC20(underlying),\\n msg.sender,\\n address(this),\\n assets\\n );\\n // consider the hardcoded slippage limit, 4626 compliance requires no minimum param.\\n uint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n assets,\\n assets - (assets / 100)\\n );\\n _transfer(address(this), r, returned);\\n return returned;\\n }\\n```\\n
ERC5095.deposit doesn't check if the received shares are less than the provided amount | medium | `ERC5095.deposit` doesn't check whether the received shares are less than the provided amount. In some cases this leads to a loss of funds.\\nThe main point of principal tokens is to buy them when their price is below the underlying price (you can buy 101 tokens while paying only 100 base tokens) and then collect the interest at maturity (for example, in one month you would earn 1 base token in this case).\\nThe `ERC5095.deposit` function takes the amount of base tokens that the user wants to deposit and returns the amount of shares received. To avoid losses, the amount of shares should be at least greater than the amount of base tokens provided by the user.\\n```\\n function deposit(address r, uint256 a) external override returns (uint256) {\\n if (block.timestamp > maturity) {\\n revert Exception(\\n 21,\\n block.timestamp,\\n maturity,\\n address(0),\\n address(0)\\n );\\n }\\n uint128 shares = Cast.u128(previewDeposit(a));\\n Safe.transferFrom(IERC20(underlying), msg.sender, address(this), a);\\n // consider the hardcoded slippage limit, 4626 compliance requires no minimum param.\\n uint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n Cast.u128(a),\\n shares - (shares / 100)\\n );\\n _transfer(address(this), r, returned);\\n return returned;\\n }\\n```\\n\\nWhen calling the marketplace, you can see that a slippage of 1 percent is provided.\\n```\\nuint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n Cast.u128(a),\\n shares - (shares / 100)\\n );\\n```\\n\\nBut this is not enough in some cases.\\nFor example, we have an `ERC5095` token with a short maturity that provides `0.5%` interest. userA calls the `deposit` function with 1000 as the base amount. They want to get back 1005 share tokens and, after maturity, earn 5 tokens on this trade.\\nBut because the slippage is set to `1%`, it's possible that the price changes and the user receives 995 share tokens instead of 1005, which means the user has lost 5 base tokens.\\nI propose to add one more check in addition to slippage: verify that the returned shares amount is greater than the provided assets amount. | Add this check at the end: `require(returned > a, "received less than provided")` | Loss of funds. | ```\\n function deposit(address r, uint256 a) external override returns (uint256) {\\n if (block.timestamp > maturity) {\\n revert Exception(\\n 21,\\n block.timestamp,\\n maturity,\\n address(0),\\n address(0)\\n );\\n }\\n uint128 shares = Cast.u128(previewDeposit(a));\\n Safe.transferFrom(IERC20(underlying), msg.sender, address(this), a);\\n // consider the hardcoded slippage limit, 4626 compliance requires no minimum param.\\n uint128 returned = IMarketPlace(marketplace).sellUnderlying(\\n underlying,\\n maturity,\\n Cast.u128(a),\\n shares - (shares / 100)\\n );\\n _transfer(address(this), r, returned);\\n return returned;\\n }\\n```\\n
Curve LP Controller withdraw and claim function uses wrong signature | medium | The function signature used for `WITHDRAWCLAIM` in both CurveLPStakingController.sol and BalancerLPStakingController.sol is incorrect, leading to the function call not succeeding.\\nIn both the CurveLPStakingController.sol and BalancerLPStakingController.sol contracts, the function selector `0x00ebf5dd` is used for `WITHDRAWCLAIM`. This selector corresponds to a function signature of `withdraw(uint256,address,bool)`.\\n```\\nbytes4 constant WITHDRAWCLAIM = 0x00ebf5dd;\\n```\\n\\nHowever, the `withdraw()` function in the Curve contract does not have an address argument. Instead, the function signature reads `withdraw(uint256,bool)`, which corresponds to a function selector of `0x38d07436`. | Change the function selector in both contracts to `0x38d07436`. | Users who have deposited assets into Curve pools will not be able to claim their rewards when they withdraw their tokens. | ```\\nbytes4 constant WITHDRAWCLAIM = 0x00ebf5dd;\\n```\\n
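A small sketch showing where the correct selector comes from; deriving it from the signature at compile time avoids this class of typo.

```
// bytes4(keccak256("withdraw(uint256,bool)")) == 0x38d07436, the Curve gauge's withdraw-and-claim entry point.
bytes4 constant WITHDRAWCLAIM = bytes4(keccak256("withdraw(uint256,bool)"));

// The previous value, 0x00ebf5dd, is the selector for withdraw(uint256,address,bool),
// which the Curve gauge does not expose.
```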
Strategist nonce is not checked | medium | The strategist nonce is not checked while validating a commitment. This makes it impossible for a strategist to cancel a signed commitment.\\n`VaultImplementation.commitToLien` gives the ability to borrow from the vault. The conditions of the loan are discussed off-chain, and the owner or delegate of the vault then creates and signs the deal details. Later, the borrower can provide it as the `IAstariaRouter.Commitment calldata params` param to `VaultImplementation.commitToLien`.\\nAfter checking the signer of the commitment, the `VaultImplementation._validateCommitment` function calls `AstariaRouter.validateCommitment`.\\n```\\n function validateCommitment(IAstariaRouter.Commitment calldata commitment)\\n public\\n returns (bool valid, IAstariaRouter.LienDetails memory ld)\\n {\\n require(\\n commitment.lienRequest.strategy.deadline >= block.timestamp,\\n "deadline passed"\\n );\\n\\n\\n require(\\n strategyValidators[commitment.lienRequest.nlrType] != address(0),\\n "invalid strategy type"\\n );\\n\\n\\n bytes32 leaf;\\n (leaf, ld) = IStrategyValidator(\\n strategyValidators[commitment.lienRequest.nlrType]\\n ).validateAndParse(\\n commitment.lienRequest,\\n COLLATERAL_TOKEN.ownerOf(\\n commitment.tokenContract.computeId(commitment.tokenId)\\n ),\\n commitment.tokenContract,\\n commitment.tokenId\\n );\\n\\n\\n return (\\n MerkleProof.verifyCalldata(\\n commitment.lienRequest.merkle.proof,\\n commitment.lienRequest.merkle.root,\\n leaf\\n ),\\n ld\\n );\\n }\\n```\\n\\nThis function checks additional params, one of which is `commitment.lienRequest.strategy.deadline`, but it doesn't check the strategist's nonce, even though this nonce is used while signing.\\nAlso, `AstariaRouter` provides the ability to increment the strategist's nonce, but it is never called. That means the strategist always uses the same nonce and can't cancel their commitment. | Give the strategist the ability to call the `increaseNonce` function. | The strategist can't cancel their commitment. A user can use this commitment to borrow up to 5 times. | ```\\n function validateCommitment(IAstariaRouter.Commitment calldata commitment)\\n public\\n returns (bool valid, IAstariaRouter.LienDetails memory ld)\\n {\\n require(\\n commitment.lienRequest.strategy.deadline >= block.timestamp,\\n "deadline passed"\\n );\\n\\n\\n require(\\n strategyValidators[commitment.lienRequest.nlrType] != address(0),\\n "invalid strategy type"\\n );\\n\\n\\n bytes32 leaf;\\n (leaf, ld) = IStrategyValidator(\\n strategyValidators[commitment.lienRequest.nlrType]\\n ).validateAndParse(\\n commitment.lienRequest,\\n COLLATERAL_TOKEN.ownerOf(\\n commitment.tokenContract.computeId(commitment.tokenId)\\n ),\\n commitment.tokenContract,\\n commitment.tokenId\\n );\\n\\n\\n return (\\n MerkleProof.verifyCalldata(\\n commitment.lienRequest.merkle.proof,\\n commitment.lienRequest.merkle.root,\\n leaf\\n ),\\n ld\\n );\\n }\\n```\\n
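A hedged sketch of one way to wire this up (the mapping, field names, and placement are assumptions; the real commitment struct layout may differ): expose increaseNonce and verify the signed nonce inside validateCommitment.

```
// Sketch: additions to AstariaRouter (names are illustrative).
mapping(address => uint256) public strategistNonce;

function increaseNonce() external {
    strategistNonce[msg.sender]++; // invalidates every commitment signed with the old nonce
}

// Inside validateCommitment(), alongside the existing deadline check:
// require(
//     commitment.lienRequest.strategy.nonce == strategistNonce[strategist],
//     "stale strategist nonce"
// );
```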
The implied value of a public vault can be impaired, liquidity providers can lose funds | high | The implied value of a public vault can be impaired, liquidity providers can lose funds\\nBorrowers can partially repay their liens, which is handled by the `_payment` function (LienToken.sol#L594). When repaying a part of a lien, `lien.amount` is updated to include currently accrued debt (LienToken.sol#L605-L617):\\n```\\nLien storage lien = lienData[lienId];\\nlien.amount = _getOwed(lien); // @audit current debt, including accrued interest; saved to storage!\\n```\\n\\nNotice that `lien.amount` is updated in storage, and `lien.last` wasn't updated.\\nThen, lien's slope is subtracted from vault's slope accumulator to be re-calculated after the repayment (LienToken.sol#L620-L630):\\n```\\nif (isPublicVault) {\\n // @audit calculates and subtracts lien's slope from vault's slope\\n IPublicVault(lienOwner).beforePayment(lienId, paymentAmount);\\n}\\nif (lien.amount > paymentAmount) {\\n lien.amount -= paymentAmount;\\n // @audit lien.last is updated only after payment amount subtraction\\n lien.last = block.timestamp.safeCastTo32();\\n // slope does not need to be updated if paying off the rest, since we neutralize slope in beforePayment()\\n if (isPublicVault) {\\n // @audit re-calculates and re-applies lien's slope after the repayment\\n IPublicVault(lienOwner).afterPayment(lienId);\\n }\\n}\\n```\\n\\nIn the `beforePayment` function, `LIEN_TOKEN().calculateSlope(lienId)` is called to calculate lien's current slope (PublicVault.sol#L433-L442):\\n```\\nfunction beforePayment(uint256 lienId, uint256 amount) public onlyLienToken {\\n _handleStrategistInterestReward(lienId, amount);\\n uint256 lienSlope = LIEN_TOKEN().calculateSlope(lienId);\\n if (lienSlope > slope) {\\n slope = 0;\\n } else {\\n slope -= lienSlope;\\n }\\n last = block.timestamp;\\n}\\n```\\n\\nThe `calculateSlope` function reads a lien from storage and calls `_getOwed` again (LienToken.sol#L440-L445):\\n```\\nfunction calculateSlope(uint256 lienId) public view returns (uint256) {\\n // @audit lien.amount includes interest accrued so far\\n Lien memory lien = lienData[lienId];\\n uint256 end = (lien.start + lien.duration);\\n uint256 owedAtEnd = _getOwed(lien, end);\\n // @audit lien.last wasn't updated in `_payment`, it's an older timestamp\\n return (owedAtEnd - lien.amount).mulDivDown(1, end - lien.last);\\n}\\n```\\n\\nThis is where double counting of accrued interest happens. Recall that lien's amount already includes the interest that was accrued by this moment (in the `_payment` function). 
Now, interest is calculated again and is applied to the amount that already includes (a portion) it (LienToken.sol#L544-L550):\\n```\\nfunction _getOwed(Lien memory lien, uint256 timestamp)\\n internal\\n view\\n returns (uint256)\\n{\\n // @audit lien.amount already includes interest accrued so far\\n return lien.amount + _getInterest(lien, timestamp);\\n}\\n```\\n\\nLienToken.sol#L177-L196:\\n```\\nfunction _getInterest(Lien memory lien, uint256 timestamp)\\n internal\\n view\\n returns (uint256)\\n{\\n if (!lien.active) {\\n return uint256(0);\\n }\\n uint256 delta_t;\\n if (block.timestamp >= lien.start + lien.duration) {\\n delta_t = uint256(lien.start + lien.duration - lien.last);\\n } else {\\n // @audit lien.last wasn't updated in `_payment`, so the `delta_t` is bigger here\\n delta_t = uint256(timestamp.safeCastTo32() - lien.last);\\n }\\n return\\n // @audit rate applied to a longer delta_t and multiplied by a bigger amount than expected\\n delta_t.mulDivDown(lien.rate, 1).mulDivDown(\\n lien.amount,\\n INTEREST_DENOMINATOR\\n );\\n}\\n```\\n | In the `_payment` function, consider updating `lien.amount` after the `beforePayment` call:\\n```\\n// Remove the line below\\n// Remove the line below\\n// Remove the line below\\n a/src/LienToken.sol\\n// Add the line below\\n// Add the line below\\n// Add the line below\\n b/src/LienToken.sol\\n@@ // Remove the line below\\n614,12 // Add the line below\\n614,13 @@ contract LienToken is ERC721, ILienToken, Auth, TransferAgent {\\n type(IPublicVault).interfaceId\\n );\\n\\n// Remove the line below\\n lien.amount = _getOwed(lien);\\n// Remove the line below\\n\\n address payee = getPayee(lienId);\\n if (isPublicVault) {\\n IPublicVault(lienOwner).beforePayment(lienId, paymentAmount);\\n }\\n// Add the line below\\n\\n// Add the line below\\n lien.amount = _getOwed(lien);\\n// Add the line below\\n\\n if (lien.amount > paymentAmount) {\\n lien.amount // Remove the line below\\n= paymentAmount;\\n lien.last = block.timestamp.safeCastTo32();\\n```\\n\\nIn this case, lien's slope calculation won't be affected in the `beforePayment` call and the correct slope will be removed from the slope accumulator. | Double counting of interest will result in a wrong lien slope, which will affect the vault's slope accumulator. This will result in an invalid implied value of a vault (PublicVault.sol#L406-L413):\\nIf miscalculated lien slope is bigger than expected, vault's slope will be smaller than expected (due to the subtraction in beforePayment), and vault's implied value will also be smaller. Liquidity providers will lose money because they won't be able to redeem the whole liquidity (vault's implied value, `totalAssets`, is used in the conversion of LP shares, ERC4626-Cloned.sol#L392-L412)\\nIf miscalculated lien slope is smaller than expected, vault's slope will be higher, and vaults implied value will also be higher. However, it won't be backed by actual liquidity, thus the liquidity providers that exit earlier will get a bigger share of the underlying assets. The last liquidity provider won't be able to get their entire share. | ```\\nLien storage lien = lienData[lienId];\\nlien.amount = _getOwed(lien); // @audit current debt, including accrued interest; saved to storage!\\n```\\n |
buyoutLien() will cause the vault to fail to processEpoch() | high | LienToken#buyoutLien() does not reduce vault#liensOpenForEpoch, while vault#processEpoch() checks vault#liensOpenForEpoch[currentEpoch] == uint256(0), so processEpoch() will fail.\\nWhen a LienToken is created, vault#liensOpenForEpoch[currentEpoch] is incremented; when the lien is repaid or liquidated, vault#liensOpenForEpoch[currentEpoch] is decremented. LienToken#buyoutLien() transfers the lien from the vault to another receiver, so liensOpenForEpoch needs to be reduced as well.\\n```\\nfunction buyoutLien(ILienToken.LienActionBuyout calldata params) external {\\n // rest of code.\\n /**** transfer but not liensOpenForEpoch-- *****/\\n _transfer(ownerOf(lienId), address(params.receiver), lienId);\\n }\\n```\\n | ```\\n function buyoutLien(ILienToken.LienActionBuyout calldata params) external {\\n// rest of code.\\n\\n+ //do decreaseEpochLienCount()\\n+ address lienOwner = ownerOf(lienId);\\n+ bool isPublicVault = IPublicVault(lienOwner).supportsInterface(\\n+ type(IPublicVault).interfaceId\\n+ );\\n+ if (isPublicVault && !AUCTION_HOUSE.auctionExists(collateralId)) { \\n+ IPublicVault(lienOwner).decreaseEpochLienCount(\\n+ IPublicVault(lienOwner).getLienEpoch(lienData[lienId].start + lienData[lienId].duration)\\n+ );\\n+ } \\n\\n lienData[lienId].last = block.timestamp.safeCastTo32();\\n lienData[lienId].start = block.timestamp.safeCastTo32();\\n lienData[lienId].rate = ld.rate.safeCastTo240();\\n lienData[lienId].duration = ld.duration.safeCastTo32();\\n _transfer(ownerOf(lienId), address(params.receiver), lienId);\\n }\\n```\\n | processEpoch() may fail | ```\\nfunction buyoutLien(ILienToken.LienActionBuyout calldata params) external {\\n // rest of code.\\n /**** transfer but not liensOpenForEpoch-- *****/\\n _transfer(ownerOf(lienId), address(params.receiver), lienId);\\n }\\n```\\n
_deleteLienPosition can be called by anyone to delete any lien they wish | high | `_deleteLienPosition` is a public function that doesn't check the caller. This allows anyone to call it an remove whatever lien they wish from whatever collateral they wish\\n```\\nfunction _deleteLienPosition(uint256 collateralId, uint256 position) public {\\n uint256[] storage stack = liens[collateralId];\\n require(position < stack.length, "index out of bounds");\\n\\n emit RemoveLien(\\n stack[position],\\n lienData[stack[position]].collateralId,\\n lienData[stack[position]].position\\n );\\n for (uint256 i = position; i < stack.length - 1; i++) {\\n stack[i] = stack[i + 1];\\n }\\n stack.pop();\\n}\\n```\\n\\n`_deleteLienPosition` is a `public` function and doesn't validate that it's being called by any permissioned account. The result is that anyone can call it to delete any lien that they want. It wouldn't remove the lien data but it would remove it from the array associated with `collateralId`, which would allow it to pass the `CollateralToken.sol#releaseCheck` and the underlying to be withdrawn by the user. | Change `_deleteLienPosition` to `internal` rather than `public`. | All liens can be deleted completely rugging lenders | ```\\nfunction _deleteLienPosition(uint256 collateralId, uint256 position) public {\\n uint256[] storage stack = liens[collateralId];\\n require(position < stack.length, "index out of bounds");\\n\\n emit RemoveLien(\\n stack[position],\\n lienData[stack[position]].collateralId,\\n lienData[stack[position]].position\\n );\\n for (uint256 i = position; i < stack.length - 1; i++) {\\n stack[i] = stack[i + 1];\\n }\\n stack.pop();\\n}\\n```\\n |
Public vaults can become insolvent because of missing `yIntercept` update | high | The deduction of `yIntercept` during payments is missing in `beforePayment()`, which can lead to vault insolvency.\n`yIntercept` is declared as "sum of all LienToken amounts" and documented elsewhere as "yIntercept (virtual assets) of a PublicVault". It is used to calculate the total assets of a public vault as: slope.mulDivDown(delta_t, 1) + `yIntercept`.\nIt is expected to be updated on deposits, payments, withdrawals, and liquidations. However, the deduction of `yIntercept` during payments is missing in `beforePayment()`. As noted in the function's Natspec:\n```\n  /**\n   * @notice Hook to update the slope and yIntercept of the PublicVault on payment.\n   * The rate for the LienToken is subtracted from the total slope of the PublicVault, and recalculated in afterPayment().\n   * @param lienId The ID of the lien.\n   * @param amount The amount paid off to deduct from the yIntercept of the PublicVault.\n   */\n```\n\nthe payment amount should be deducted from `yIntercept`, but this deduction is missing. | Issue Public vaults can become insolvent because of missing `yIntercept` update\nUpdate `yIntercept` in `beforePayment()` by the `amount` value. | This missing update will inflate the inferred value of the public vault relative to its actual value, leading to eventual insolvency because of the resulting protocol miscalculations. | ```\n  /**\n   * @notice Hook to update the slope and yIntercept of the PublicVault on payment.\n   * The rate for the LienToken is subtracted from the total slope of the PublicVault, and recalculated in afterPayment().\n   * @param lienId The ID of the lien.\n   * @param amount The amount paid off to deduct from the yIntercept of the PublicVault.\n   */\n```\n
Bidder can cheat auction by placing bid much higher than reserve price when there are still open liens against a token | high | When a token still has open liens against it, only the value of the liens will be paid by the bidder, but their current bid will be set to the full value of the bid. This can be abused in one of two ways. The bidder could place a massive bid like 500 ETH that will never be outbid, or they could place a bid they know will be outbid and profit the difference when they're sent a refund.\n```\nuint256[] memory liens = LIEN_TOKEN.getLiens(tokenId);\nuint256 totalLienAmount = 0;\nif (liens.length > 0) {\n  for (uint256 i = 0; i < liens.length; ++i) {\n    uint256 payment;\n    uint256 lienId = liens[i];\n\n    ILienToken.Lien memory lien = LIEN_TOKEN.getLien(lienId);\n\n    if (transferAmount >= lien.amount) {\n      payment = lien.amount;\n      transferAmount -= payment;\n    } else {\n      payment = transferAmount;\n      transferAmount = 0;\n    }\n    if (payment > 0) {\n      LIEN_TOKEN.makePayment(tokenId, payment, lien.position, payer);\n    }\n  }\n} else {\n  //@audit-issue logic skipped if liens.length > 0\n  TRANSFER_PROXY.tokenTransferFrom(\n    weth,\n    payer,\n    COLLATERAL_TOKEN.ownerOf(tokenId),\n    transferAmount\n  );\n}\n```\n\nWe can examine the payment logic inside `_handleIncomingPayment` and see that if there are still open liens against the token, then only the amount of WETH needed to pay back the liens will be taken from the payer, since the else portion of the logic will be skipped.\n```\nuint256 vaultPayment = (amount - currentBid);\n\nif (firstBidTime == 0) {\n  auctions[tokenId].firstBidTime = block.timestamp.safeCastTo64();\n} else if (lastBidder != address(0)) {\n  uint256 lastBidderRefund = amount - vaultPayment;\n  _handleOutGoingPayment(lastBidder, lastBidderRefund);\n}\n_handleIncomingPayment(tokenId, vaultPayment, address(msg.sender));\n\nauctions[tokenId].currentBid = amount;\nauctions[tokenId].bidder = address(msg.sender);\n```\n\nIn `createBid`, `auctions[tokenId].currentBid` is set to `amount` after the last bidder is refunded and the excess is paid against liens. We can walk through an example to illustrate this:\nAssume a token with a single lien of amount 10 WETH and an auction is opened for that token. Now a user places a bid for 20 WETH. They are the first bidder, so `lastBidder = address(0)` and `currentBid = 0`. `_handleIncomingPayment` will be called with a value of 20 WETH since there is no lastBidder to refund. Inside `_handleIncomingPayment` the lien information is read, showing 1 lien against the token. Since `transferAmount >= lien.amount`, `payment = lien.amount`. A payment will be made by the bidder against the lien for 10 WETH. After the payment, `_handleIncomingPayment` will return, only having taken 10 WETH from the bidder. In the next line currentBid is set to 20 WETH but the bidder has only paid 10 WETH. Now if they are outbid, the new bidder will have to refund them 20 WETH even though they initially only paid 10 WETH. | In `_handleIncomingPayment`, all residual transfer amount should be sent to `COLLATERAL_TOKEN.ownerOf(tokenId)`. | Bidder can steal funds due to `_handleIncomingPayment` not taking enough WETH | ```\nuint256[] memory liens = LIEN_TOKEN.getLiens(tokenId);\nuint256 totalLienAmount = 0;\nif (liens.length > 0) {\n  for (uint256 i = 0; i < liens.length; ++i) {\n    uint256 payment;\n    uint256 lienId = liens[i];\n\n    ILienToken.Lien memory lien = LIEN_TOKEN.getLien(lienId);\n\n    if (transferAmount >= lien.amount) {\n      payment = lien.amount;\n      transferAmount -= payment;\n    } else {\n      payment = transferAmount;\n      transferAmount = 0;\n    }\n    if (payment > 0) {\n      LIEN_TOKEN.makePayment(tokenId, payment, lien.position, payer);\n    }\n  }\n} else {\n  //@audit-issue logic skipped if liens.length > 0\n  TRANSFER_PROXY.tokenTransferFrom(\n    weth,\n    payer,\n    COLLATERAL_TOKEN.ownerOf(tokenId),\n    transferAmount\n  );\n}\n```\n
Possible to fully block PublicVault.processEpoch function. No one will be able to receive their funds | high | Possible to fully block the `PublicVault.processEpoch` function. No one will be able to receive their funds.\nWhen liquidity providers want to redeem their share from `PublicVault`, they call the `redeemFutureEpoch` function, which will create a new `WithdrawProxy` for the epoch (if not created already) and then mint shares for the redeemer in the `WithdrawProxy`. `PublicVault` transfers the user's shares to itself.\n```\n  function redeemFutureEpoch(\n    uint256 shares,\n    address receiver,\n    address owner,\n    uint64 epoch\n  ) public virtual returns (uint256 assets) {\n    // check to ensure that the requested epoch is not the current epoch or in the past\n    require(epoch >= currentEpoch, "Exit epoch too low");\n\n\n    require(msg.sender == owner, "Only the owner can redeem");\n    // check for rounding error since we round down in previewRedeem.\n\n\n    ERC20(address(this)).safeTransferFrom(owner, address(this), shares);\n\n\n    // Deploy WithdrawProxy if no WithdrawProxy exists for the specified epoch\n    _deployWithdrawProxyIfNotDeployed(epoch);\n\n\n    emit Withdraw(msg.sender, receiver, owner, assets, shares);\n\n\n    // WithdrawProxy shares are minted 1:1 with PublicVault shares\n    WithdrawProxy(withdrawProxies[epoch]).mint(receiver, shares); // was withdrawProxies[withdrawEpoch]\n  }\n```\n\nThis function mints `WithdrawProxy` shares 1:1 to the redeemed `PublicVault` shares. Later, after `processEpoch` and `transferWithdrawReserve` are called, the funds will be sent to the `WithdrawProxy` and users can then redeem their shares from it.\nThe `processEpoch` function decides how many funds should be sent to the `WithdrawProxy`.\n```\n    if (withdrawProxies[currentEpoch] != address(0)) {\n      uint256 proxySupply = WithdrawProxy(withdrawProxies[currentEpoch])\n        .totalSupply();\n\n\n      liquidationWithdrawRatio = proxySupply.mulDivDown(1e18, totalSupply());\n\n\n      if (liquidationAccountants[currentEpoch] != address(0)) {\n        LiquidationAccountant(liquidationAccountants[currentEpoch])\n          .setWithdrawRatio(liquidationWithdrawRatio);\n      }\n\n\n      uint256 withdrawAssets = convertToAssets(proxySupply);\n      // compute the withdrawReserve\n      uint256 withdrawLiquidations = liquidationsExpectedAtBoundary[\n        currentEpoch\n      ].mulDivDown(liquidationWithdrawRatio, 1e18);\n      withdrawReserve = withdrawAssets - withdrawLiquidations;\n      // burn the tokens of the LPs withdrawing\n      _burn(address(this), proxySupply);\n\n\n      _decreaseYIntercept(withdrawAssets);\n    }\n```\n\nThis is how it is decided how much money should be sent to the WithdrawProxy. First, it looks at the totalSupply of the WithdrawProxy: `uint256 proxySupply = WithdrawProxy(withdrawProxies[currentEpoch]).totalSupply();`.\nThen it converts that to an asset amount: `uint256 withdrawAssets = convertToAssets(proxySupply);`\nIn the end, the function burns `proxySupply` amount of shares controlled by the PublicVault: `_burn(address(this), proxySupply);`\nThen this amount is allowed to be sent (if there are no auctions currently, but this is not important right now).\nThis all allows an attacker to call `WithdrawProxy.deposit` to mint new shares for himself and increase the totalSupply of the WithdrawProxy, so `proxySupply` becomes more than what was transferred to `PublicVault`.\nThis is the attack scenario.\n1.PublicVault is created and funded with 50 ethers. 2.Someone calls the `redeemFutureEpoch` function to create a new WithdrawProxy for the next epoch. 3.The attacker sends 1 wei to the WithdrawProxy to make totalAssets > 0, then deposits 1 wei into the WithdrawProxy. Now WithdrawProxy.totalSupply > PublicVault.balanceOf(PublicVault). 4.Someone calls `processEpoch` and it reverts on burning.\nAs a result, nothing will be sent to the WithdrawProxy where shares were minted for users. They have simply lost their money.\nThis attack can also be extended to drain users' funds to the attacker. The attacker should be a liquidity provider. He can initiate the next redeem for the next epoch, deposit enough into the new WithdrawProxy to get new shares, and then call `processEpoch`, which will also send the amount that was not sent to the previously attacked WithdrawProxy. So the attacker will take those funds. | Make the WithdrawProxy.deposit function not callable. | Funds of PublicVault depositors are stolen. | ```\n  function redeemFutureEpoch(\n    uint256 shares,\n    address receiver,\n    address owner,\n    uint64 epoch\n  ) public virtual returns (uint256 assets) {\n    // check to ensure that the requested epoch is not the current epoch or in the past\n    require(epoch >= currentEpoch, "Exit epoch too low");\n\n\n    require(msg.sender == owner, "Only the owner can redeem");\n    // check for rounding error since we round down in previewRedeem.\n\n\n    ERC20(address(this)).safeTransferFrom(owner, address(this), shares);\n\n\n    // Deploy WithdrawProxy if no WithdrawProxy exists for the specified epoch\n    _deployWithdrawProxyIfNotDeployed(epoch);\n\n\n    emit Withdraw(msg.sender, receiver, owner, assets, shares);\n\n\n    // WithdrawProxy shares are minted 1:1 with PublicVault shares\n    WithdrawProxy(withdrawProxies[epoch]).mint(receiver, shares); // was withdrawProxies[withdrawEpoch]\n  }\n```\n
Any public vault without a delegate can be drained | high | If a public vault is created without a delegate, delegate will have the value of `address(0)`. This is also the value returned by `ecrecover` for invalid signatures (for example, if v is set to a positive number that is not 27 or 28), which allows a malicious actor to cause the signature validation to pass for arbitrary parameters, allowing them to drain a vault using a worthless NFT as collateral.\nWhen a new Public Vault is created, the Router calls the `init()` function on the vault as follows:\n```\nVaultImplementation(vaultAddr).init(\n  VaultImplementation.InitParams(delegate)\n);\n```\n\nIf a delegate wasn't set, this will pass `address(0)` to the vault. If this value is passed, the vault simply skips the assignment, keeping the delegate variable set to the default 0 value:\n```\nif (params.delegate != address(0)) {\n  delegate = params.delegate;\n}\n```\n\nOnce the delegate is set to the zero address, any commitment can be validated, even if the signature is incorrect. This is because of a quirk in `ecrecover` which returns `address(0)` for invalid signatures. A signature can be made invalid by providing a positive integer that is not 27 or 28 as the `v` value. The result is that the following function call assigns recovered = address(0):\n```\n    address recovered = ecrecover(\n      keccak256(\n        encodeStrategyData(\n          params.lienRequest.strategy,\n          params.lienRequest.merkle.root\n        )\n      ),\n      params.lienRequest.v,\n      params.lienRequest.r,\n      params.lienRequest.s\n    );\n```\n\nTo confirm the validity of the signature, the function performs two checks:\n```\nrequire(\n  recovered == params.lienRequest.strategy.strategist,\n  "strategist must match signature"\n);\nrequire(\n  recovered == owner() || recovered == delegate,\n  "invalid strategist"\n);\n```\n\nThese can be easily passed by setting the `strategist` in the params to `address(0)`. At this point, all checks will pass and the parameters will be accepted as approved by the vault.\nWith this power, a borrower can create params that allow them to borrow the vault's full funds in exchange for a worthless NFT, allowing them to drain the vault and steal all the users' funds. | Issue Any public vault without a delegate can be drained\nAdd a require statement that the recovered address cannot be the zero address:\n```\nrequire(recovered != address(0));\n```\n | All users' funds held in a vault with no delegate set can be stolen. | ```\nVaultImplementation(vaultAddr).init(\n  VaultImplementation.InitParams(delegate)\n);\n```\n
Auctions can end in epoch after intended, underpaying withdrawers | high | When liens are liquidated, the router checks if the auction will complete in a future epoch and, if it does, sets up a liquidation accountant and other logistics to account for it. However, the check for auction completion does not take into account extended auctions, which can therefore end in an unexpected epoch and cause accounting issues, losing user funds.\\nThe liquidate() function performs the following check to determine if it should set up the liquidation to be paid out in a future epoch:\\n```\\nif (PublicVault(owner).timeToEpochEnd() <= COLLATERAL_TOKEN.auctionWindow())\\n```\\n\\nThis function assumes that the auction will only end in a future epoch if the `auctionWindow` (typically set to 2 days) pushes us into the next epoch.\\nHowever, auctions can last up to an additional 1 day if bids are made within the final 15 minutes. In these cases, auctions are extended repeatedly, up to a maximum of 1 day.\\n```\\nif (firstBidTime + duration - block.timestamp < timeBuffer) {\\n uint64 newDuration = uint256(\\n duration + (block.timestamp + timeBuffer - firstBidTime)\\n ).safeCastTo64();\\n if (newDuration <= auctions[tokenId].maxDuration) {\\n auctions[tokenId].duration = newDuration;\\n } else {\\n auctions[tokenId].duration =\\n auctions[tokenId].maxDuration -\\n firstBidTime;\\n }\\n extended = true;\\n}\\n```\\n\\nThe result is that there are auctions for which accounting is set up for them to end in the current epoch, but will actual end in the next epoch. | Change the check to take the possibility of extension into account:\\n```\\nif (PublicVault(owner).timeToEpochEnd() <= COLLATERAL_TOKEN.auctionWindow() + 1 days)\\n```\\n | Users who withdrew their funds in the current epoch, who are entitled to a share of the auction's proceeds, will not be paid out fairly. | ```\\nif (PublicVault(owner).timeToEpochEnd() <= COLLATERAL_TOKEN.auctionWindow())\\n```\\n |
Strategists are paid 10x the vault fee because of a math error | high | Strategists set their vault fee in BPS (x / 10,000), but are paid out as x / 1,000. The result is that strategists will always earn 10x whatever vault fee they set.\\nWhenever any payment is made towards a public vault, `beforePayment()` is called, which calls `_handleStrategistInterestReward()`.\\nThe function is intended to take the amount being paid, adjust by the vault fee to get the fee amount, and convert that amount of value into shares, which are added to `strategistUnclaimedShares`.\\n```\\nfunction _handleStrategistInterestReward(uint256 lienId, uint256 amount)\\n internal\\n virtual\\n override\\n {\\n if (VAULT_FEE() != uint256(0)) {\\n uint256 interestOwing = LIEN_TOKEN().getInterest(lienId);\\n uint256 x = (amount > interestOwing) ? interestOwing : amount;\\n uint256 fee = x.mulDivDown(VAULT_FEE(), 1000);\\n strategistUnclaimedShares += convertToShares(fee);\\n }\\n }\\n```\\n\\nSince the vault fee is stored in basis points, to get the vault fee, we should take the amount, multiply it by `VAULT_FEE()` and divide by 10,000. However, we accidentally divide by 1,000, which results in a 10x larger reward for the strategist than intended.\\nAs an example, if the vault fee is intended to be 10%, we would set `VAULT_FEE = 1000`. In that case, for any amount paid off, we would calculate `fee = amount * 1000 / 1000` and the full amount would be considered a fee for the strategist. | Change the `1000` in the `_handleStrategistInterestReward()` function to `10_000`. | Strategists will be paid 10x the agreed upon rate for their role, with the cost being borne by users. | ```\\nfunction _handleStrategistInterestReward(uint256 lienId, uint256 amount)\\n internal\\n virtual\\n override\\n {\\n if (VAULT_FEE() != uint256(0)) {\\n uint256 interestOwing = LIEN_TOKEN().getInterest(lienId);\\n uint256 x = (amount > interestOwing) ? interestOwing : amount;\\n uint256 fee = x.mulDivDown(VAULT_FEE(), 1000);\\n strategistUnclaimedShares += convertToShares(fee);\\n }\\n }\\n```\\n |
Claiming liquidationAccountant will reduce vault y-intercept by more than the correct amount | high | When `claim()` is called on the Liquidation Accountant, it decreases the y-intercept based on the balance of the contract after funds have been distributed, rather than before. The result is that the y-intercept will be decreased more than it should be, siphoning funds from all users.\\nWhen `LiquidationAccountant.sol:claim()` is called, it uses its `withdrawRatio` to send some portion of its earnings to the `WITHDRAW_PROXY` and the rest to the vault.\\nAfter performing these transfers, it updates the vault's y-intercept, decreasing it by the gap between the expected return from the auction, and the reality of how much was sent back to the vault:\\n```\\nPublicVault(VAULT()).decreaseYIntercept(\\n (expected - ERC20(underlying()).balanceOf(address(this))).mulDivDown(\\n 1e18 - withdrawRatio,\\n 1e18\\n )\\n);\\n```\\n\\nThis rebalancing uses the balance of the `liquidationAccountant` to perform its calculation, but it is done after the balance has already been distributed, so it will always be 0.\\nLooking at an example:\\n`expected = 1 ether` (meaning the y-intercept is currently based on this value)\\n`withdrawRatio = 0` (meaning all funds will go back to the vault)\\nThe auction sells for exactly 1 ether\\n1 ether is therefore sent directly to the vault\\nIn this case, the y-intercept should not be updated, as the outcome was equal to the `expected` outcome\\nHowever, because the calculation above happens after the funds are distributed, the decrease equals `(expected - 0) * 1e18 / 1e18`, which equals `expected`\\nThat decrease should not happen, and causing problems for the protocol's accounting. For example, when `withdraw()` is called, it uses the y-intercept in its calculation of the `totalAssets()` held by the vault, creating artificially low asset values for a given number of shares. | The amount of assets sent to the vault has already been calculated, as we've already sent it. Therefore, rather than the full existing formula, we can simply call:\\n```\\nPublicVault(VAULT()).decreaseYIntercept(expected - balance)\\n```\\n\\nAlternatively, we can move the current code above the block of code that transfers funds out (L73). | Every time the liquidation accountant is used, the vault's math will be thrown off and user shares will be falsely diluted. | ```\\nPublicVault(VAULT()).decreaseYIntercept(\\n (expected - ERC20(underlying()).balanceOf(address(this))).mulDivDown(\\n 1e18 - withdrawRatio,\\n 1e18\\n )\\n);\\n```\\n |
liquidationAccountant can be claimed at any time | high | New liquidations are sent to the `liquidationAccountant` with a `finalAuctionTimestamp` value, but the actual value that is passed in is simply the duration of an auction. The `claim()` function uses this value in a require check, so this error will allow it to be called before the auction is complete.\\nWhen a lien is liquidated, `AstariaRouter.sol:liquidate()` is called. If the lien is set to end in a future epoch, we call `handleNewLiquidation()` on the `liquidationAccountant`.\\nOne of the values passed in this call is the `finalAuctionTimestamp`, which updates the `finalAuctionEnd` variable in the `liquidationAccountant`. This value is then used to protect the `claim()` function from being called too early.\\nHowever, when the router calls `handleLiquidationAccountant()`, it passes the duration of an auction rather than the final timestamp:\\n```\\nLiquidationAccountant(accountant).handleNewLiquidation(\\n lien.amount,\\n COLLATERAL_TOKEN.auctionWindow() + 1 days\\n);\\n```\\n\\nAs a result, `finalAuctionEnd` will be set to 259200 (3 days).\\nWhen `claim()` is called, it requires the final auction to have ended for the function to be called:\\n```\\nrequire(\\n block.timestamp > finalAuctionEnd || finalAuctionEnd == uint256(0),\\n "final auction has not ended"\\n);\\n```\\n\\nBecause of the error above, `block.timestamp` will always be greater than `finalAuctionEnd`, so this will always be permitted. | Adjust the call from the router to use the ending timestamp as the argument, rather than the duration:\\n```\\nLiquidationAccountant(accountant).handleNewLiquidation(\\n lien.amount,\\n block.timestamp + COLLATERAL_TOKEN.auctionWindow() + 1 days\\n);\\n```\\n | Anyone can call `claim()` before an auction has ended. This can cause many problems, but the clearest is that it can ruin the protocol's accounting by decreasing the Y intercept of the vault.\\nFor example, if `claim()` is called before the auction, the returned value will be 0, so the Y intercept will be decreased as if there was an auction that returned no funds. | ```\\nLiquidationAccountant(accountant).handleNewLiquidation(\\n lien.amount,\\n COLLATERAL_TOKEN.auctionWindow() + 1 days\\n);\\n```\\n |
Incorrect fees will be charged | high | If the user has provided a transferAmount greater than all lien.amount values combined, then initiatorPayment will be incorrect, since it is charged on the full amount when only part of it was used, as shown in the PoC.\nObserve the _handleIncomingPayment function.\nLet's say transferAmount is 1000.\ninitiatorPayment is calculated on this full transferAmount:\n```\nuint256 initiatorPayment = transferAmount.mulDivDown(\n      auction.initiatorFee,\n      100\n    ); \n```\n\nNow all liens are iterated and each lien.amount is deducted from transferAmount until all liens have been processed:\n```\nif (transferAmount >= lien.amount) {\n          payment = lien.amount;\n          transferAmount -= payment;\n        } else {\n          payment = transferAmount;\n          transferAmount = 0;\n        }\n\n        if (payment > 0) {\n          LIEN_TOKEN.makePayment(tokenId, payment, lien.position, payer);\n        }\n      }\n```\n\nLet's say that after the loop completes, transferAmount still has 100 left.\nThis means only 900 of the transferAmount was used, but the fee was deducted on the full amount of 1000. | Calculate the exact transfer amount required for the transaction and compute the initiator fee based on that amount. | Excess initiator fees that were not required will be deducted. | ```\nuint256 initiatorPayment = transferAmount.mulDivDown(\n      auction.initiatorFee,\n      100\n    ); \n```\n
isValidRefinance checks both conditions instead of one, leading to rejection of valid refinances | high | `isValidRefinance()` is intended to check whether either (a) the loan interest rate decreased sufficiently or (b) the loan duration increased sufficiently. Instead, it requires both of these to be true, leading to the rejection of valid refinances.\\nWhen trying to buy out a lien from `LienToken.sol:buyoutLien()`, the function calls `AstariaRouter.sol:isValidRefinance()` to check whether the refi terms are valid.\\n```\\nif (!ASTARIA_ROUTER.isValidRefinance(lienData[lienId], ld)) {\\n revert InvalidRefinance();\\n}\\n```\\n\\nOne of the roles of this function is to check whether the rate decreased by more than 0.5%. From the docs:\\nAn improvement in terms is considered if either of these conditions is met:\\nThe loan interest rate decrease by more than 0.5%.\\nThe loan duration increases by more than 14 days.\\nThe currently implementation of the code requires both of these conditions to be met:\\n```\\nreturn (\\n newLien.rate >= minNewRate &&\\n ((block.timestamp + newLien.duration - lien.start - lien.duration) >= minDurationIncrease)\\n);\\n```\\n | Change the AND in the return statement to an OR:\\n```\\nreturn (\\n newLien.rate >= minNewRate ||\\n ((block.timestamp + newLien.duration - lien.start - lien.duration) >= minDurationIncrease)\\n);\\n```\\n | Valid refinances that meet one of the two criteria will be rejected. | ```\\nif (!ASTARIA_ROUTER.isValidRefinance(lienData[lienId], ld)) {\\n revert InvalidRefinance();\\n}\\n```\\n |
isValidRefinance will approve invalid refinances and reject valid refinances due to buggy math | high | The math in `isValidRefinance()` checks whether the rate increased rather than decreased, resulting in invalid refinances being approved and valid refinances being rejected.\\nWhen trying to buy out a lien from `LienToken.sol:buyoutLien()`, the function calls `AstariaRouter.sol:isValidRefinance()` to check whether the refi terms are valid.\\n```\\nif (!ASTARIA_ROUTER.isValidRefinance(lienData[lienId], ld)) {\\n revert InvalidRefinance();\\n}\\n```\\n\\nOne of the roles of this function is to check whether the rate decreased by more than 0.5%. From the docs:\\nAn improvement in terms is considered if either of these conditions is met:\\nThe loan interest rate decrease by more than 0.5%.\\nThe loan duration increases by more than 14 days.\\nThe current implementation of the function does the opposite. It calculates a `minNewRate` (which should be maxNewRate) and then checks whether the new rate is greater than that value.\\n```\\nuint256 minNewRate = uint256(lien.rate) - minInterestBPS;\\nreturn (newLien.rate >= minNewRate // rest of code\\n```\\n\\nThe result is that if the new rate has increased (or decreased by less than 0.5%), it will be considered valid, but if it has decreased by more than 0.5% (the ideal behavior) it will be rejected as invalid. | Flip the logic used to check the rate to the following:\\n```\\nuint256 maxNewRate = uint256(lien.rate) - minInterestBPS;\\nreturn (newLien.rate <= maxNewRate// rest of code\\n```\\n | Users can perform invalid refinances with the wrong parameters.\\nUsers who should be able to perform refinances at better rates will not be able to. | ```\\nif (!ASTARIA_ROUTER.isValidRefinance(lienData[lienId], ld)) {\\n revert InvalidRefinance();\\n}\\n```\\n |
new loans "max duration" is not restricted | medium | The documentation states: "Epochs PublicVaults operate around a time-based epoch system. An epoch length is defined by the strategist that deploys the PublicVault. The duration of new loans is restricted to not exceed the end of the next epoch. For example, if a PublicVault is 15 days into a 30-day epoch, new loans must not be longer than 45 days." However, a duration of more than two epochs can be added.\nThe max duration is not checked, so the test below succeeds even when the duration extends beyond the next epoch.\n#AstariaTest#testBasicPublicVaultLoan\n```\n  function testBasicPublicVaultLoan() public {\n\n    IAstariaRouter.LienDetails memory standardLien2 =\n    IAstariaRouter.LienDetails({\n      maxAmount: 50 ether,\n      rate: (uint256(1e16) * 150) / (365 days),\n      duration: 50 days, /****** more than 14 * 2 *******/\n      maxPotentialDebt: 50 ether\n    });    \n\n    _commitToLien({\n      vault: publicVault,\n      strategist: strategistOne,\n      strategistPK: strategistOnePK,\n      tokenContract: tokenContract,\n      tokenId: tokenId,\n      lienDetails: standardLien2, /**** use standardLien2 ****/\n      amount: 10 ether,\n      isFirstLien: true\n    });\n  }\n```\n | PublicVault#_afterCommitToLien\n```\n  function _afterCommitToLien(uint256 lienId, uint256 amount)\n    internal\n    virtual\n    override\n  {\n    // increment slope for the new lien\n    unchecked {\n      slope += LIEN_TOKEN().calculateSlope(lienId);\n    }\n\n    ILienToken.Lien memory lien = LIEN_TOKEN().getLien(lienId);\n\n    uint256 epoch = Math.ceilDiv(\n      lien.start + lien.duration - START(),\n      EPOCH_LENGTH()\n    ) - 1;\n\n+   require(epoch <= currentEpoch + 1,"epoch max <= currentEpoch + 1");\n\n    liensOpenForEpoch[epoch]++;\n    emit LienOpen(lienId, epoch);\n  }\n```\n | Loans can be created with durations longer than documented. | ```\n  function testBasicPublicVaultLoan() public {\n\n    IAstariaRouter.LienDetails memory standardLien2 =\n    IAstariaRouter.LienDetails({\n      maxAmount: 50 ether,\n      rate: (uint256(1e16) * 150) / (365 days),\n      duration: 50 days, /****** more than 14 * 2 *******/\n      maxPotentialDebt: 50 ether\n    });    \n\n    _commitToLien({\n      vault: publicVault,\n      strategist: strategistOne,\n      strategistPK: strategistOnePK,\n      tokenContract: tokenContract,\n      tokenId: tokenId,\n      lienDetails: standardLien2, /**** use standardLien2 ****/\n      amount: 10 ether,\n      isFirstLien: true\n    });\n  }\n```\n
_makePayment is logically inconsistent with how lien stack is managed causing payments to multiple liens to fail | medium | `_makePayment(uint256, uint256)` looping logic is inconsistent with how `_deleteLienPosition` manages the lien stack. `_makePayment` loops from 0 to `openLiens.length`, but `_deleteLienPosition` (called when a lien is fully paid off) actively compresses the lien stack. When a payment pays off multiple liens, the compressing effect causes an array OOB error towards the end of the loop.\n```\nfunction _makePayment(uint256 collateralId, uint256 totalCapitalAvailable)\n  internal\n{\n  uint256[] memory openLiens = liens[collateralId];\n  uint256 paymentAmount = totalCapitalAvailable;\n  for (uint256 i = 0; i < openLiens.length; ++i) {\n    uint256 capitalSpent = _payment(\n      collateralId,\n      uint8(i),\n      paymentAmount,\n      address(msg.sender)\n    );\n    paymentAmount -= capitalSpent;\n  }\n}\n```\n\n`LienToken.sol#_makePayment(uint256, uint256)` loops from 0 to `openLiens.length`. This loop attempts to make a payment to each lien, calling `_payment` with the current index of the loop.\n```\nfunction _deleteLienPosition(uint256 collateralId, uint256 position) public {\n  uint256[] storage stack = liens[collateralId];\n  require(position < stack.length, "index out of bounds");\n\n  emit RemoveLien(\n    stack[position],\n    lienData[stack[position]].collateralId,\n    lienData[stack[position]].position\n  );\n  for (uint256 i = position; i < stack.length - 1; i++) {\n    stack[i] = stack[i + 1];\n  }\n  stack.pop();\n}\n```\n\n`LienToken.sol#_deleteLienPosition` is called on liens when they are fully paid off. The most interesting portion of the function is how the lien is removed from the stack. We can see that all liens above the lien in question are slid down the stack and the top is popped. This has the effect of reducing the total length of the array. This is where the logical inconsistency is. If the first lien is paid off, it will be removed and the formerly second lien will now occupy its index. So when `_payment` is called in the next loop with the next index, it won't reference the second lien since the second lien is now at the first lien's index.\nAssume there are 2 liens on some collateral: `liens[0].amount = 100` and `liens[1].amount = 50`. A user wants to pay off their entire lien balance so they call `_makePayment(uint256, uint256)` with an amount of 150. On the first loop it calls `_payment` with an index of 0. This pays off `liens[0]`. `_deleteLienPosition` is called with index of 0 removing `liens[0]`. Because of the sliding logic in `_deleteLienPosition`, `lien[1]` has now slid into the `lien[0]` position. On the second loop it calls `_payment` with an index of 1. When it tries to grab the data for the lien at that index, it will revert due to an OOB error because the array no longer contains an index of 1. | Payment logic inside of `AuctionHouse.sol` works. `_makePayment` should be changed to mimic that logic. | Large payments are impossible and users must manually pay off each lien separately | ```\nfunction _makePayment(uint256 collateralId, uint256 totalCapitalAvailable)\n  internal\n{\n  uint256[] memory openLiens = liens[collateralId];\n  uint256 paymentAmount = totalCapitalAvailable;\n  for (uint256 i = 0; i < openLiens.length; ++i) {\n    uint256 capitalSpent = _payment(\n      collateralId,\n      uint8(i),\n      paymentAmount,\n      address(msg.sender)\n    );\n    paymentAmount -= capitalSpent;\n  }\n}\n```\n
LienToken._payment function increases users' debt | medium | LienToken._payment function increases users' debt by setting `lien.amount = _getOwed(lien)`.\n`LienToken._payment` is used by the `LienToken.makePayment` function, which allows a borrower to repay part or all of his debt.\nAlso this function can be called by `AuctionHouse` when the lien is liquidated.\n```\n  function _payment(\n    uint256 collateralId,\n    uint8 position,\n    uint256 paymentAmount,\n    address payer\n  ) internal returns (uint256) {\n    if (paymentAmount == uint256(0)) {\n      return uint256(0);\n    }\n\n\n    uint256 lienId = liens[collateralId][position];\n    Lien storage lien = lienData[lienId];\n    uint256 end = (lien.start + lien.duration);\n    require(\n      block.timestamp < end || address(msg.sender) == address(AUCTION_HOUSE),\n      "cannot pay off an expired lien"\n    );\n\n\n    address lienOwner = ownerOf(lienId);\n    bool isPublicVault = IPublicVault(lienOwner).supportsInterface(\n      type(IPublicVault).interfaceId\n    );\n\n\n    lien.amount = _getOwed(lien);\n\n\n    address payee = getPayee(lienId);\n    if (isPublicVault) {\n      IPublicVault(lienOwner).beforePayment(lienId, paymentAmount);\n    }\n    if (lien.amount > paymentAmount) {\n      lien.amount -= paymentAmount;\n      lien.last = block.timestamp.safeCastTo32();\n      // slope does not need to be updated if paying off the rest, since we neutralize slope in beforePayment()\n      if (isPublicVault) {\n        IPublicVault(lienOwner).afterPayment(lienId);\n      }\n    } else {\n      if (isPublicVault && !AUCTION_HOUSE.auctionExists(collateralId)) {\n        // since the openLiens count is only positive when there are liens that haven't been paid off\n        // that should be liquidated, this lien should not be counted anymore\n        IPublicVault(lienOwner).decreaseEpochLienCount(\n          IPublicVault(lienOwner).getLienEpoch(end)\n        );\n      }\n      //delete liens\n      _deleteLienPosition(collateralId, position);\n      delete lienData[lienId]; //full delete\n\n\n      _burn(lienId);\n    }\n\n\n    TRANSFER_PROXY.tokenTransferFrom(WETH, payer, payee, paymentAmount);\n\n\n    emit Payment(lienId, paymentAmount);\n    return paymentAmount;\n  }\n```\n\nHere lien.amount becomes lien.amount + accrued interest, because `_getOwed` does that calculation.\n`lien.amount` is the amount that the user borrowed, so that line has just increased the user's debt. If he didn't pay off the full lien amount, then next time he will pay more interest.\nExample: a user borrows 1 eth, so his `lien.amount` is 1 eth. Then he wants to repay some part (let's say 0.5 eth). Now his `lien.amount` becomes `lien.amount + interest`. When he pays next time, he pays `(lien.amount + interest) + new interest`. So interest is accumulated on previous interest. | Issue LienToken._payment function increases users' debt\nDo not update lien.amount to _getOwed(lien). | The user's borrowed amount increases, leading to a loss of funds. | ```\n  function _payment(\n    uint256 collateralId,\n    uint8 position,\n    uint256 paymentAmount,\n    address payer\n  ) internal returns (uint256) {\n    if (paymentAmount == uint256(0)) {\n      return uint256(0);\n    }\n\n\n    uint256 lienId = liens[collateralId][position];\n    Lien storage lien = lienData[lienId];\n    uint256 end = (lien.start + lien.duration);\n    require(\n      block.timestamp < end || address(msg.sender) == address(AUCTION_HOUSE),\n      "cannot pay off an expired lien"\n    );\n\n\n    address lienOwner = ownerOf(lienId);\n    bool isPublicVault = IPublicVault(lienOwner).supportsInterface(\n      type(IPublicVault).interfaceId\n    );\n\n\n    lien.amount = _getOwed(lien);\n\n\n    address payee = getPayee(lienId);\n    if (isPublicVault) {\n      IPublicVault(lienOwner).beforePayment(lienId, paymentAmount);\n    }\n    if (lien.amount > paymentAmount) {\n      lien.amount -= paymentAmount;\n      lien.last = block.timestamp.safeCastTo32();\n      // slope does not need to be updated if paying off the rest, since we neutralize slope in beforePayment()\n      if (isPublicVault) {\n        IPublicVault(lienOwner).afterPayment(lienId);\n      }\n    } else {\n      if (isPublicVault && !AUCTION_HOUSE.auctionExists(collateralId)) {\n        // since the openLiens count is only positive when there are liens that haven't been paid off\n        // that should be liquidated, this lien should not be counted anymore\n        IPublicVault(lienOwner).decreaseEpochLienCount(\n          IPublicVault(lienOwner).getLienEpoch(end)\n        );\n      }\n      //delete liens\n      _deleteLienPosition(collateralId, position);\n      delete lienData[lienId]; //full delete\n\n\n      _burn(lienId);\n    }\n\n\n    TRANSFER_PROXY.tokenTransferFrom(WETH, payer, payee, paymentAmount);\n\n\n    emit Payment(lienId, paymentAmount);\n    return paymentAmount;\n  }\n```\n
_validateCommitment fails for approved operators | medium | If a collateral token owner approves another user as an operator for all their tokens (rather than just for a given token), the validation check in `_validateCommitment()` will fail.\nThe collateral token is implemented as an ERC721, which has two ways to approve another user:\nApprove them to take actions with a given token (approve())\nApprove them as an "operator" for all your owned tokens (setApprovalForAll())\nHowever, when the `_validateCommitment()` function checks that the token is owned or approved by `msg.sender`, it does not accept those who are set as operators.\n```\nif (msg.sender != holder) {\n  require(msg.sender == operator, "invalid request");\n}\n```\n | Include an additional check to confirm whether the `msg.sender` is approved as an operator on the token:\n```\n    address holder = ERC721(COLLATERAL_TOKEN()).ownerOf(collateralId);\n    address approved = ERC721(COLLATERAL_TOKEN()).getApproved(collateralId);\n    bool isOperator = ERC721(COLLATERAL_TOKEN()).isApprovedForAll(holder, msg.sender);\n\n    if (msg.sender != holder) {\n      require(msg.sender == approved || isOperator, "invalid request");\n    }\n```\n | Approved operators of collateral tokens will be rejected from taking actions with those tokens. | ```\nif (msg.sender != holder) {\n  require(msg.sender == operator, "invalid request");\n}\n```\n
timeToEpochEnd calculates backwards, breaking protocol math | medium | When a lien is liquidated, it calls `timeToEpochEnd()` to determine if a liquidation accountant should be deployed and we should adjust the protocol math to expect payment in a future epoch. Because of an error in the implementation, all liquidations that will pay out in the current epoch are set up as future epoch liquidations.\\nThe `liquidate()` function performs the following check to determine if it should set up the liquidation to be paid out in a future epoch:\\n```\\nif (PublicVault(owner).timeToEpochEnd() <= COLLATERAL_TOKEN.auctionWindow())\\n```\\n\\nThis check expects that `timeToEpochEnd()` will return the time until the epoch is over. However, the implementation gets this backwards:\\n```\\nfunction timeToEpochEnd() public view returns (uint256) {\\n uint256 epochEnd = START() + ((currentEpoch + 1) * EPOCH_LENGTH());\\n\\n if (epochEnd >= block.timestamp) {\\n return uint256(0);\\n }\\n\\n return block.timestamp - epochEnd;\\n}\\n```\\n\\nIf `epochEnd >= block.timestamp`, that means that there IS remaining time in the epoch, and it should perform the calculation to return `epochEnd - block.timestamp`. In the opposite case, where `epochEnd <= block.timestamp`, it should return zero.\\nThe result is that the function returns 0 for any epoch that isn't over. Since `0 < COLLATERAL_TOKEN.auctionWindow())`, all liquidated liens will trigger a liquidation accountant and the rest of the accounting for future epoch withdrawals. | Fix the `timeToEpochEnd()` function so it calculates the remaining time properly:\\n```\\nfunction timeToEpochEnd() public view returns (uint256) {\\n uint256 epochEnd = START() + ((currentEpoch + 1) * EPOCH_LENGTH());\\n\\n if (epochEnd <= block.timestamp) {\\n return uint256(0);\\n }\\n\\n return epochEnd - block.timestamp; //\\n}\\n```\\n | Accounting for a future epoch withdrawal causes a number of inconsistencies in the protocol's math, the impact of which vary depending on the situation. As a few examples:\\nIt calls `decreaseEpochLienCount()`. This has the effect of artificially lowering the number of liens in the epoch, which will cause the final liens paid off in the epoch to revert (and will let us process the epoch earlier than intended).\\nIt sets the payee of the lien to the liquidation accountant, which will pay out according to the withdrawal ratio (whereas all funds should be staying in the vault).\\nIt calls `increaseLiquidationsExpectedAtBoundary()`, which can throw off the math when processing the epoch. | ```\\nif (PublicVault(owner).timeToEpochEnd() <= COLLATERAL_TOKEN.auctionWindow())\\n```\\n |
_payment() function transfers full paymentAmount, overpaying first liens | medium | The `_payment()` function sends the full `paymentAmount` argument to the lien owner, which both (a) overpays lien owners if borrowers accidentally overpay and (b) sends the first lien owner all the funds for the entire loop if a borrower is intending to pay back multiple loans.\nThere are two `makePayment()` functions in LienToken.sol. One allows the user to specify a `position` (which specific lien they want to pay back), and the other iterates through their liens, paying each back.\nIn both cases, the functions call out to `_payment()` with a `paymentAmount`, which is sent (in full) to the lien owner.\n```\nTRANSFER_PROXY.tokenTransferFrom(WETH, payer, payee, paymentAmount);\n```\n\nThis behavior can cause problems in both cases.\nThe first case is less severe: If the user is intending to pay off one lien, and they enter a `paymentAmount` greater than the amount owed, the function will send the full `paymentAmount` to the lien owner, rather than just sending the amount owed.\nThe second case is much more severe: If the user is intending to pay towards all their loans, the `_makePayment()` function loops through open liens and performs the following:\n```\nuint256 paymentAmount = totalCapitalAvailable;\nfor (uint256 i = 0; i < openLiens.length; ++i) {\n  uint256 capitalSpent = _payment(\n    collateralId,\n    uint8(i),\n    paymentAmount,\n    address(msg.sender)\n  );\n  paymentAmount -= capitalSpent;\n}\n```\n\nThe `_payment()` function is called for the first lien with `paymentAmount` set to the full amount sent to the function. The result is that this full amount is sent to the first lien holder, which could greatly exceed the amount they are owed. | Issue _payment() function transfers full paymentAmount, overpaying first liens\nIn `_payment()`, if `lien.amount < paymentAmount`, set `paymentAmount = lien.amount`.\nThe result will be that, in this case, only `lien.amount` is transferred to the lien owner, and this value is also returned from the function to accurately represent the amount that was paid. | A user who is intending to pay off all their loans will end up paying all the funds they offered, but only paying off their first lien, potentially losing a large amount of funds. | ```\nTRANSFER_PROXY.tokenTransferFrom(WETH, payer, payee, paymentAmount);\n```\n
_getInterest() function uses block.timestamp instead of the inputted timestamp | medium | The `_getInterest()` function takes a timestamp as input. However, in a crucial check in the function, it uses `block.timestamp` instead. The result is that other functions expecting accurate interest amounts will receive incorrect values.\\nThe `_getInterest()` function takes a lien and a timestamp as input. The intention is for it to calculate the amount of time that has passed in the lien (delta_t) and multiply this value by the rate and the amount to get the interest generated by this timestamp.\\nHowever, the function uses the following check regarding the timestamp:\\n```\\nif (block.timestamp >= lien.start + lien.duration) {\\n delta_t = uint256(lien.start + lien.duration - lien.last);\\n} \\n```\\n\\nBecause this check uses `block.timestamp` before returning the maximum interest payment, the function will incorrectly determine which path to take, and return an incorrect interest value. | Change `block.timestamp` to `timestamp` so that the if statement checks correctly. | There are two negative consequences that can come from this miscalculation:\\nif the function is called when the lien is over (block.timestamp >= lien.start + lien.duration) to check an interest amount from a timestamp during the lien, it will incorrectly return the maximum interest value\\nIf the function is called when the lien is active for a timestamp long after the lien is over, it will skip the check to return maximum value and return the value that would have been generated if interest kept accruing indefinitely (using delta_t = uint256(timestamp.safeCastTo32() - lien.last);)\\nThis `_getInterest()` function is used in many crucial protocol functions (_getOwed(), `calculateSlope()`, `changeInSlope()`, getTotalDebtForCollateralToken()), so these incorrect values can have surprising and unexpected negative impacts on the protocol. | ```\\nif (block.timestamp >= lien.start + lien.duration) {\\n delta_t = uint256(lien.start + lien.duration - lien.last);\\n} \\n```\\n |
Vault Fee uses incorrect offset leading to wildly incorrect value, allowing strategists to steal all funds | medium | `VAULT_FEE()` uses an incorrect offset, returning a number ~1e16X greater than intended, providing strategists with unlimited access to drain all vault funds.\\nWhen using ClonesWithImmutableArgs, offset values are set so that functions representing variables can retrieve the correct values from storage.\\nIn the ERC4626-Cloned.sol implementation, `VAULT_TYPE()` is given an offset of 172. However, the value before it is a `uint8` at the offset 164. Since a `uint8` takes only 1 byte of space, `VAULT_TYPE()` should have an offset of 165.\\nI put together a POC to grab the value of `VAULT_FEE()` in the test setup:\\n```\\nfunction testVaultFeeIncorrectlySet() public {\\n Dummy721 nft = new Dummy721();\\n address tokenContract = address(nft);\\n uint256 tokenId = uint256(1);\\n address publicVault = _createPublicVault({\\n strategist: strategistOne,\\n delegate: strategistTwo,\\n epochLength: 14 days\\n });\\n uint fee = PublicVault(publicVault).VAULT_FEE();\\n console.log(fee)\\n assert(fee == 5000); // 5000 is the value that was meant to be set\\n}\\n```\\n\\nIn this case, the value returned is > 3e20. | Set the offset for `VAULT_FEE()` to 165. I tested this value in the POC I created and it correctly returned the value of 5000. | This is a highly critical bug. `VAULT_FEE()` is used in `_handleStrategistInterestReward()` to determine the amount of tokens that should be allocated to `strategistUnclaimedShares`.\\n```\\nif (VAULT_FEE() != uint256(0)) {\\n uint256 interestOwing = LIEN_TOKEN().getInterest(lienId);\\n uint256 x = (amount > interestOwing) ? interestOwing : amount;\\n uint256 fee = x.mulDivDown(VAULT_FEE(), 1000); //VAULT_FEE is a basis point\\n strategistUnclaimedShares += convertToShares(fee);\\n }\\n```\\n\\nThe result is that strategistUnclaimedShares will be billions of times higher than the total interest generated, essentially giving strategist access to withdraw all funds from their vaults at any time. | ```\\nfunction testVaultFeeIncorrectlySet() public {\\n Dummy721 nft = new Dummy721();\\n address tokenContract = address(nft);\\n uint256 tokenId = uint256(1);\\n address publicVault = _createPublicVault({\\n strategist: strategistOne,\\n delegate: strategistTwo,\\n epochLength: 14 days\\n });\\n uint fee = PublicVault(publicVault).VAULT_FEE();\\n console.log(fee)\\n assert(fee == 5000); // 5000 is the value that was meant to be set\\n}\\n```\\n |
Bids cannot be created within timeBuffer of completion of a max duration auction | medium | The auction mechanism is intended to watch for bids within `timeBuffer` of the end of the auction, and automatically increase the remaining duration to `timeBuffer` if such a bid comes in.\nThere is an error in the implementation that causes all bids within `timeBuffer` of the end of a max duration auction to revert, effectively ending the auction early and cutting off bidders who intended to wait until the end.\nIn the `createBid()` function in AuctionHouse.sol, the function checks if a bid is within the final `timeBuffer` of the auction:\n```\nif (firstBidTime + duration - block.timestamp < timeBuffer)\n```\n\nIf so, it sets `newDuration` to equal the amount that will extend the auction to `timeBuffer` from now:\n```\nuint64 newDuration = uint256( duration + (block.timestamp + timeBuffer - firstBidTime) ).safeCastTo64();\n```\n\nIf this `newDuration` doesn't extend beyond the `maxDuration`, this works great. However, if it does extend beyond `maxDuration`, the following code is used to update duration:\n```\nauctions[tokenId].duration = auctions[tokenId].maxDuration - firstBidTime;\n```\n\nThis code is incorrect. `maxDuration` will be a duration for the auction (currently set to 3 days), whereas `firstBidTime` is a timestamp for the start of the auction (current timestamps are > 1 billion).\nSubtracting `firstBidTime` from `maxDuration` will underflow, which will revert the function. | Change this assignment to simply assign `duration` to `maxDuration`, as follows:\n```\nauctions[tokenId].duration = auctions[tokenId].maxDuration\n```\n | Bidders who expected to wait until the end of the auction to bid will be cut off from bidding, as the auction will revert their bids.\nVaults whose collateral is up for auction will earn less than they otherwise would have. | ```\nif (firstBidTime + duration - block.timestamp < timeBuffer)\n```\n
Loan can be written off by anybody before overdue delay expires | high | When a borrower takes a second loan after a loan that has been written off, this second loan can be written off instantly by any other member due to a missing update of the last repay block, leaving the staker at a loss.\nA staker stakes and vouches for a borrower\nThe borrower borrows by calling UToken:borrow: `accountBorrows[borrower].lastRepay` is updated with the current block number\nThe staker writes off the entire debt of the borrower by calling `UserManager:debtWriteOff`. In the internal call to `UToken:debtWriteOff` the principal is set to zero but `accountBorrows[borrower].lastRepay` is not updated\n90 days pass and a staker vouches for the same borrower\nThe borrower borrows by calling UToken:borrow: `accountBorrows[borrower].lastRepay` is not set to the current block since it is non-zero, and stays at the previous value.\n`accountBorrows[borrower].lastRepay` is now old enough to allow the check in `UserManager:debtWriteOff` at line 738 to pass. The debt is written off by any other member immediately after the loan is given. The staker loses the staked amount immediately.\n```\n        if (block.number <= lastRepay + overdueBlocks + maxOverdueBlocks) {\n            if (staker != msg.sender) revert AuthFailed();\n        }\n```\n\nThe last repay block is still stale, and a new loan can be taken and written off immediately many times as long as stakers keep trusting the borrower.\nNote that this can be exploited maliciously by the borrower, who can continuously ask for loans and then have them written off immediately. | Issue Loan can be written off by anybody before overdue delay expires\nReset `lastRepay` for the borrower to 0 when the debt is written off completely\n```\n    function debtWriteOff(address borrower, uint256 amount) external override whenNotPaused onlyUserManager {\n        uint256 oldPrincipal = getBorrowed(borrower);\n        uint256 repayAmount = amount > oldPrincipal ? oldPrincipal : amount;\n\n// Add the line below\n+       if (oldPrincipal == repayAmount) accountBorrows[borrower].lastRepay = 0;\n        accountBorrows[borrower].principal = oldPrincipal - repayAmount;\n        totalBorrows -= repayAmount;\n    }\n```\n | The staker of the loan loses the staked amount well before the overdue delay has expired | ```\n        if (block.number <= lastRepay + overdueBlocks + maxOverdueBlocks) {\n            if (staker != msg.sender) revert AuthFailed();\n        }\n```\n
A stake that has just been locked gets full reward multiplier | medium | A staker gets rewarded with the full multiplier even if its stake has just been locked. The multiplier calculation should take into account the duration of the lock.\nA staker stakes an amount of tokens.\nThe staker waits for some time\nThe staker has control of another member (bribe, ...)\nThe staker vouches for this other member\nThe member borrows\nThe staker calls `Comptroller:withdrawRewards` and gets an amount of rewards with a multiplier corresponding to a locked stake\nThe member repays the loan\nNote that steps 4 to 7 can be made in one tx, so no interest is paid at step 7.\nThe result is that the staker can always get the full multiplier for rewards, without ever putting any funds at risk, nor any interest being paid. This is done at the expense of other honest stakers, who get proportionally less of the rewards dripped into the comptroller.\nFor a coded PoC, replace the test `"staker with locked balance gets more rewards"` in `staking.ts` with the following:\n```\n    it("PoC: staker with locked balance gets more rewards even when just locked", async () => {\n        const trustAmount = parseUnits("2000");\n        const borrowAmount = parseUnits("1800");\n        const [account, staker, borrower] = members;\n\n        const [accountStaked, borrowerStaked, stakerStaked] = await helpers.getStakedAmounts(\n            account,\n            staker,\n            borrower\n        );\n\n        expect(accountStaked).eq(borrowerStaked);\n        expect(borrowerStaked).eq(stakerStaked);\n\n        await helpers.updateTrust(staker, borrower, trustAmount);\n        \n        await roll(10);\n        await helpers.borrow(borrower, borrowAmount); // borrows just after withdrawing\n        \n        const [accountMultiplier, stakerMultiplier] = await helpers.getRewardsMultipliers(account, staker);\n        console.log("accountMultiplier: ", accountMultiplier);\n        console.log("StakerMultiplier: ", stakerMultiplier);\n        expect(accountMultiplier).lt(stakerMultiplier); // the multiplier is larger even if just locked\n    });\n```\n | Issue A stake that has just been locked gets full reward multiplier\nIntroduce the duration of a lock into the rewards calculation, so that the full multiplier is given only to a lock that is as old as the stake itself. | A staker can get larger rewards designed for locked stakes by locking and unlocking in the same tx. | ```\n    it("PoC: staker with locked balance gets more rewards even when just locked", async () => {\n        const trustAmount = parseUnits("2000");\n        const borrowAmount = parseUnits("1800");\n        const [account, staker, borrower] = members;\n\n        const [accountStaked, borrowerStaked, stakerStaked] = await helpers.getStakedAmounts(\n            account,\n            staker,\n            borrower\n        );\n\n        expect(accountStaked).eq(borrowerStaked);\n        expect(borrowerStaked).eq(stakerStaked);\n\n        await helpers.updateTrust(staker, borrower, trustAmount);\n        \n        await roll(10);\n        await helpers.borrow(borrower, borrowAmount); // borrows just after withdrawing\n        \n        const [accountMultiplier, stakerMultiplier] = await helpers.getRewardsMultipliers(account, staker);\n        console.log("accountMultiplier: ", accountMultiplier);\n        console.log("StakerMultiplier: ", stakerMultiplier);\n        expect(accountMultiplier).lt(stakerMultiplier); // the multiplier is larger even if just locked\n    });\n```\n
updateTrust() vouchers also need a maxVouchers check | medium | maxVouchers exists to prevent the "vouchees" array from getting so big that looping over it hits a gas explosion problem, but the "vouchers" array has the same problem. If the vouchers array is not checked, it can also grow large and cause updateLocked() to fail.\nvouchees are checked against maxVouchers, but vouchers are not:\n```\n    function updateTrust(address borrower, uint96 trustAmount) external onlyMember(msg.sender) whenNotPaused {\n// rest of code\n            uint256 voucheesLength = vouchees[staker].length;\n            if (voucheesLength >= maxVouchers) revert MaxVouchees();\n\n\n            uint256 voucherIndex = vouchers[borrower].length;\n            voucherIndexes[borrower][staker] = Index(true, uint128(voucherIndex));\n            vouchers[borrower].push(Vouch(staker, trustAmount, 0, 0)); /**** don't check maxVouchers****/\n```\n | ```\n    function updateTrust(address borrower, uint96 trustAmount) external onlyMember(msg.sender) whenNotPaused {\n// rest of code\n            uint256 voucheesLength = vouchees[staker].length;\n            if (voucheesLength >= maxVouchers) revert MaxVouchees();\n\n\n            uint256 voucherIndex = vouchers[borrower].length;\n+           if (voucherIndex >= maxVouchers) revert MaxVouchees();\n            voucherIndexes[borrower][staker] = Index(true, uint128(voucherIndex));\n            vouchers[borrower].push(Vouch(staker, trustAmount, 0, 0)); \n```\n | The vouchers array can grow large and cause updateLocked() to fail. | ```\n    function updateTrust(address borrower, uint96 trustAmount) external onlyMember(msg.sender) whenNotPaused {\n// rest of code\n            uint256 voucheesLength = vouchees[staker].length;\n            if (voucheesLength >= maxVouchers) revert MaxVouchees();\n\n\n            uint256 voucherIndex = vouchers[borrower].length;\n            voucherIndexes[borrower][staker] = Index(true, uint128(voucherIndex));\n            vouchers[borrower].push(Vouch(staker, trustAmount, 0, 0)); /**** don't check maxVouchers****/\n```\n
Unsafe downcasting arithmetic operations in UserManager-related contracts and in UToken.sol | medium | The value is unsafely downcast and truncated from uint256 to uint96 or uint128 in the UserManager-related contracts and in UToken.sol.\\nLet us look at it cast by cast.\\nIn UserManagerDAI.sol:\\n```\\n function stakeWithPermit(\\n uint256 amount,\\n uint256 nonce,\\n uint256 expiry,\\n uint8 v,\\n bytes32 r,\\n bytes32 s\\n ) external whenNotPaused {\\n IDai erc20Token = IDai(stakingToken);\\n erc20Token.permit(msg.sender, address(this), nonce, expiry, true, v, r, s);\\n\\n stake(uint96(amount));\\n }\\n```\\n\\nAs we can see, the user's staking amount is downcast from uint256 to uint96.\\nThe same issue exists in UserManagerERC20.sol.\\nIn the context of UToken.sol, the issue is bigger.\\nA user invokes the borrow function in UToken.sol:\\n```\\n function borrow(address to, uint256 amount) external override onlyMember(msg.sender) whenNotPaused nonReentrant {\\n```\\n\\nand\\n```\\n // Withdraw the borrowed amount of tokens from the assetManager and send them to the borrower\\n if (!assetManagerContract.withdraw(underlying, to, amount)) revert WithdrawFailed();\\n\\n // Call update locked on the userManager to lock this borrowers stakers. This function\\n // will revert if the account does not have enough vouchers to cover the borrow amount. ie\\n // the borrower is trying to borrow more than is able to be underwritten\\n IUserManager(userManager).updateLocked(msg.sender, uint96(amount + fee), true);\\n```\\n\\nNote that when we withdraw funds from the AssetManager we use a uint256 amount, but we downcast it to uint96(amount + fee) when updating the locked amount. The accounting would be badly broken if amount + fee is larger than the uint96 maximum.\\nThe same issue exists in the function UToken.sol#_repayBorrowFresh:\\n```\\n function _repayBorrowFresh(\\n address payer,\\n address borrower,\\n uint256 amount\\n ) internal {\\n```\\n\\nand\\n```\\n // Update the account borrows to reflect the repayment\\n accountBorrows[borrower].principal = borrowedAmount - repayAmount;\\n accountBorrows[borrower].interest = 0;\\n```\\n\\nand\\n```\\n IUserManager(userManager).updateLocked(borrower, uint96(repayAmount - interest), false);\\n```\\n\\nWe use a uint256 number for borrowedAmount - repayAmount, but downcast to uint96(repayAmount - interest) when updating the lock!\\nNote that there is also index-related downcasting; the damage is smaller compared to the accounting-related downcasting, since it is difficult to reach a uint128 number of vouches, but it is still worth mentioning: the index is unsafely downcast from uint256 to uint128.\\n```\\n // Get the new index that this vouch is going to be inserted at\\n // Then update the voucher indexes for this borrower as well as\\n // Adding the Vouch the the vouchers array for this staker\\n uint256 voucherIndex = vouchers[borrower].length;\\n voucherIndexes[borrower][staker] = Index(true, uint128(voucherIndex));\\n vouchers[borrower].push(Vouch(staker, trustAmount, 0, 0));\\n\\n // Add the voucherIndex of this new vouch to the vouchees array for this\\n // staker then update the voucheeIndexes with the voucheeIndex\\n uint256 voucheeIndex = voucheesLength;\\n vouchees[staker].push(Vouchee(borrower, uint96(voucherIndex)));\\n voucheeIndexes[borrower][staker] = Index(true, uint128(voucheeIndex));\\n```\\n\\nThere is also block.number-related downcasting, which is a smaller issue.\\n```\\nvouch.lastUpdated = uint64(block.number);\\n```\\n | Just use uint256, or use OpenZeppelin's SafeCast library. | The damage level from the number truncation is ranked as:\\nUToken borrow and repay downcasting > staking amount downcasting truncation > vouch index-related downcasting > block.number casting. | ```\\n function stakeWithPermit(\\n uint256 amount,\\n uint256 nonce,\\n uint256 expiry,\\n uint8 v,\\n bytes32 r,\\n bytes32 s\\n ) external whenNotPaused {\\n IDai erc20Token = IDai(stakingToken);\\n erc20Token.permit(msg.sender, address(this), nonce, expiry, true, v, r, s);\\n\\n stake(uint96(amount));\\n }\\n```\\n
getUserInfo() returns incorrect values for locked and stakedAmount | medium | The `getUserInfo()` function mixes up the values for `locked` and `stakedAmount`, so the value for each of these is returned for the other.\\nIn UnionLens.sol, the `getUserInfo()` function is used to retrieve information about a given user.\\nIn order to pull the user's staking information, the following function is called:\\n```\\n(bool isMember, uint96 locked, uint96 stakedAmount) = userManager.stakers(user);\\n```\\n\\nThis function is intended to return these three values from the UserManager.sol contract. However, in that contract, the function being called returns a Staker struct, which has the following values:\\n```\\nstruct Staker {\\n bool isMember;\\n uint96 stakedAmount;\\n uint96 locked;\\n}\\n```\\n\\nBecause both `locked` and `stakedAmount` have the type `uint96`, the function does not revert, and simply returns the incorrect values to the caller. | Reverse the order of return values in the `getUserInfo()` function, so that it reads:\\n```\\n(bool isMember, uint96 stakedAmount, uint96 locked) = userManager.stakers(user);\\n```\\n | Any user or front end calling the `getUserInfo()` function will be given incorrect values, which could lead to wrong decisions. | ```\\n(bool isMember, uint96 locked, uint96 stakedAmount) = userManager.stakers(user);\\n```\\n |
`AssetManager.rebalance()` will revert when the balance of `tokenAddress` in the money market is 0. | medium | `AssetManager.rebalance()` will revert when the balance of `tokenAddress` in a money market is 0.\\nAssetManager.rebalance() tries to withdraw tokens from each money market for rebalancing here.\\n```\\n // Loop through each money market and withdraw all the tokens\\n for (uint256 i = 0; i < moneyMarketsLength; i++) {\\n IMoneyMarketAdapter moneyMarket = moneyMarkets[i];\\n if (!moneyMarket.supportsToken(tokenAddress)) continue;\\n moneyMarket.withdrawAll(tokenAddress, address(this));\\n\\n supportedMoneyMarkets[supportedMoneyMarketsSize] = moneyMarket;\\n supportedMoneyMarketsSize++;\\n }\\n```\\n\\nWhen the balance of `tokenAddress` is 0, there is no need to call `moneyMarket.withdrawAll()`, but the loop still calls it.\\nThis will revert because Aave V3 doesn't allow withdrawing a 0 amount here.\\n```\\n function validateWithdraw(\\n DataTypes.ReserveCache memory reserveCache,\\n uint256 amount,\\n uint256 userBalance\\n ) internal pure {\\n require(amount != 0, Errors.INVALID_AMOUNT);\\n```\\n\\nSo `AssetManager.rebalance()` will revert if one money market has zero balance of `tokenAddress`. | Issue `AssetManager.rebalance()` will revert when the balance of `tokenAddress` in the money market is 0.\\nI think we can modify AaveV3Adapter.withdrawAll() to work only when the balance is positive.\\n```\\n function withdrawAll(address tokenAddress, address recipient)\\n external\\n override\\n onlyAssetManager\\n checkTokenSupported(tokenAddress)\\n {\\n address aTokenAddress = tokenToAToken[tokenAddress];\\n IERC20Upgradeable aToken = IERC20Upgradeable(aTokenAddress);\\n uint256 balance = aToken.balanceOf(address(this));\\n\\n if (balance > 0) {\\n lendingPool.withdraw(tokenAddress, type(uint256).max, recipient);\\n }\\n }\\n```\\n | The money markets can't be rebalanced if at least one supported market has a zero balance of the token. | ```\\n // Loop through each money market and withdraw all the tokens\\n for (uint256 i = 0; i < moneyMarketsLength; i++) {\\n IMoneyMarketAdapter moneyMarket = moneyMarkets[i];\\n if (!moneyMarket.supportsToken(tokenAddress)) continue;\\n moneyMarket.withdrawAll(tokenAddress, address(this));\\n\\n supportedMoneyMarkets[supportedMoneyMarketsSize] = moneyMarket;\\n supportedMoneyMarketsSize++;\\n }\\n```\\n