diff --git a/Audit_Report.pdf b/Audit_Report.pdf new file mode 100644 index 0000000..e6916d8 Binary files /dev/null and b/Audit_Report.pdf differ diff --git a/README.md b/README.md index 052b869..5fecbca 100644 --- a/README.md +++ b/README.md @@ -4,6 +4,7 @@ Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/66 ## Found by 0xbranded + ## Summary Borrowing and funding fees of both longs/shorts suffer from two distinct sources of precision loss. The level of precision loss is large enough to consistently occur at a significant level, and can even result in total omission of fee payment for periods of time. This error is especially disruptive given the sensitive nature of funding fee calculations both in determining liquidations (a core functionality), as well as payments received by LPs and funding recipients (representing a significant loss). @@ -976,20 +977,18 @@ Each added digit of precision decreases the precision loss by an order of magnit Consult `Denom.vy` for further guidance on necessary adjustments to make to the various functions to account for these updated values. - - ## Discussion **KupiaSecAdmin** -Escalate - +Escalate + This is a system design choice. It just charges less fees, there is no loss of funds happening to the protocol. Also this can be modified by updating parameters. **sherlock-admin3** -> Escalate -> +> Escalate +> > This is a system design choice. It just charges less fees, there is no loss of funds happening to the protocol. Also this can be modified by updating parameters. You've created a valid escalation! @@ -1000,47 +999,47 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **msheikhattari** -It's not a design choice, this is clearly a precision loss issue. There are real loss of funds and delayed liquidation that consistently occur given the real world parameters such as blocktime. 
these are clearly documented in the PoC with specific errors exceeding 10% consistently, and sometimes even total loss of fee payment. - -DENOM is a constant that cannot be updated. It must be corrected by increasing the decimals of precision. - +It's not a design choice, this is clearly a precision loss issue. There are real loss of funds and delayed liquidation that consistently occur given the real world parameters such as blocktime. these are clearly documented in the PoC with specific errors exceeding 10\% consistently, and sometimes even total loss of fee payment. + +DENOM is a constant that cannot be updated. It must be corrected by increasing the decimals of precision. + Given the high likelihood and high impact of loss of funds not only from fee miscalculation but delayed/avoided liquidations, this is a high severity issue. **KupiaSecAdmin** -@msheikhattari - The logic only modifies the fee ratio by rounding it down, if the ratio doesn't meet what the protocol team is looking for, they can simply increase the numerator by updating parameters. -Also even though with lower fee ratio, why does it cause loss? If borrow fee becomes less, there will be more positions openers who will also increase utilization ratio which also will increase the borrowing fee. +@msheikhattari - The logic only modifies the fee ratio by rounding it down, if the ratio doesn't meet what the protocol team is looking for, they can simply increase the numerator by updating parameters. +Also even though with lower fee ratio, why does it cause loss? If borrow fee becomes less, there will be more positions openers who will also increase utilization ratio which also will increase the borrowing fee. Still, this is low/info at most. **msheikhattari** -No parameters can be updated to fix this issue - DENOM must directly be changed (which is a constant). 
It currently supports annual fees of about 1.5% increments which is way too much precision loss with errors consistently exceeding 10% in all fee calculations. - -The numerator is blocks * rate, the point is that the rate component cannot support fees with a precision below 1.5% because of the DENOM parameter that is too small. - +No parameters can be updated to fix this issue - DENOM must directly be changed (which is a constant). It currently supports annual fees of about 1.5\% increments which is way too much precision loss with errors consistently exceeding 10\% in all fee calculations. + +The numerator is blocks * rate, the point is that the rate component cannot support fees with a precision below 1.5\% because of the DENOM parameter that is too small. + I included a PoC which clearly demonstrates this currently unavoidable precision loss. **rickkk137** -> No parameters can be updated to fix this issue - DENOM must directly be changed (which is a constant). It currently supports annual fees of about 1.5% increments which is way too much precision loss with errors consistently exceeding 10% in all fee calculations. -> -> The numerator is blocks * rate, the point is that the rate component cannot support fees with a precision below 1.5% because of the DENOM parameter that is too small. -> -> I included a PoC which clearly demonstrates this currently unavoidable precision loss. - -I read your PoC,root cause in this report is here which mentioned in this report and borrowing_paid calculation doesn't have effect -```vyper -def utilization(reserves: uint256, interest: uint256) -> uint256: - return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100)) -``` - - +> No parameters can be updated to fix this issue - DENOM must directly be changed (which is a constant). It currently supports annual fees of about 1.5\% increments which is way too much precision loss with errors consistently exceeding 10\% in all fee calculations. 
+> +> The numerator is blocks * rate, the point is that the rate component cannot support fees with a precision below 1.5\% because of the DENOM parameter that is too small. +> +> I included a PoC which clearly demonstrates this currently unavoidable precision loss. + +I read your PoC,root cause in this report is here which mentioned in this report and borrowing\_paid calculation doesn't have effect +```vyper +def utilization(reserves: uint256, interest: uint256) -> uint256: + return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100)) +``` + + **WangSecurity** -Though, this may be considered a design decision, the calculation of fees still has precision loss which would pay less fees due to rounding down. Hence, this should've been included in the known issues question in the protocol's README, but it wasn't. Also, `DENOM` couldn't be changed after the contracts were deployed, but it wasn't flagged as a hardcoded variable that could be changed. - +Though, this may be considered a design decision, the calculation of fees still has precision loss which would pay less fees due to rounding down. Hence, this should've been included in the known issues question in the protocol's README, but it wasn't. Also, `DENOM` couldn't be changed after the contracts were deployed, but it wasn't flagged as a hardcoded variable that could be changed. + Hence, this should remain a valid issue. Planning to reject the escalation. 
**msheikhattari** @@ -1053,7 +1052,7 @@ Agree with the above, as I understand, there are no extensive limitations, and t **rickkk137** -root cause in this issue and #72 is same +root cause in this issue and \#72 is same **KupiaSecAdmin** @@ -1061,18 +1060,18 @@ root cause in this issue and #72 is same **rickkk137** -@KupiaSecAdmin I agree with u ,this issue talks about two part -first one: -``` -def utilization(reserves: uint256, interest: uint256) -> uint256: - return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100)) -``` -second one: -``` -def apply(x: uint256, numerator: uint256) -> Fee: - fee : uint256 = (x * numerator) / DENOM - ``` - first one has precision loss but [second one doesn't have](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/126#issuecomment-2365007747) +@KupiaSecAdmin I agree with u ,this issue talks about two part +first one: +``` +def utilization(reserves: uint256, interest: uint256) -> uint256: + return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100)) +``` +second one: +``` +def apply(x: uint256, numerator: uint256) -> Fee: + fee : uint256 = (x * numerator) / DENOM + ``` + first one has precision loss but [second one doesn't have](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/126\#issuecomment-2365007747) **KupiaSecAdmin** @@ -1080,7 +1079,7 @@ def apply(x: uint256, numerator: uint256) -> Fee: **rickkk137** -@KupiaSecAdmin no,I don't think so, imo #87 is not possible i wrote my comment there +@KupiaSecAdmin no,I don't think so, imo \#87 is not possible i wrote my comment there **msheikhattari** @@ -1089,35 +1088,35 @@ Please refer to the PoC, the issue is currently unmitigatable due to the precisi **WangSecurity** -I believe @rickkk137 is correct that this and #72 have the same root cause and explain the same problem. 
The escalation will still be rejected, and #72 + #60 (duplicate) will be duplicated with this report because it goes more in-depth in explaining the issue and shows a higher severity. +I believe @rickkk137 is correct that this and \#72 have the same root cause and explain the same problem. The escalation will still be rejected, and \#72 + \#60 (duplicate) will be duplicated with this report because it goes more in-depth in explaining the issue and shows a higher severity. **msheikhattari** -#72 is describing a different issue. It doesn't mention the DENOM parameter at all and should not be duped with this one. - -That issue is talking about setting min values for `long_utilization` and `short_utilization`, the only similarity is that they are both sources of precision loss. Otherwise the underlying issue is different. +\#72 is describing a different issue. It doesn't mention the DENOM parameter at all and should not be duped with this one. + +That issue is talking about setting min values for `long\_utilization` and `short\_utilization`, the only similarity is that they are both sources of precision loss. Otherwise the underlying issue is different. **WangSecurity** Yes, excuse me, focused too much on the utilisation part. -The precision loss from Denom is relatively low based on the discussion under #126. But here the report combines 2 precision loss factors resulting in a major precision loss. Hence, I'm returning to my previous decision to reject the escalation, increase severity to high and leave it solo, because it combines 2 precision loss factors resulting in a more major issue than #72 which talks only about utilisation. Planning to apply this decision in a couple of hours. +The precision loss from Denom is relatively low based on the discussion under \#126. But here the report combines 2 precision loss factors resulting in a major precision loss. 
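The two compounding truncations being weighed here can be sketched in plain integer math (a sketch only: `utilization` follows the Vyper snippet quoted in the thread, while `scale` and the sample numbers are illustrative, taken from the PoC's discussion, not the contract's exact API):

```python
# Sketch of the two rounding steps: utilization truncates to a whole percent,
# then the fee scaling divides by 100 again, compounding the loss.

def utilization(reserves: int, interest: int) -> int:
    # quoted from the report: truncates to a whole percent (10.99999% -> 10)
    return 0 if (reserves == 0 or interest == 0) else interest // (reserves // 100)

def scale(max_fee: int, util_pct: int) -> int:
    # illustrative second step: another integer division by 100
    return (max_fee * util_pct) // 100

reserves = 10_000_000 * 10**18
interest = 1_099_999 * 10**18       # true utilization is ~11.0%

util = utilization(reserves, interest)   # 10 -- almost a full percent lost
fee = scale(65, util)                    # 6  -- vs ~7.15 without truncation
print(util, fee)
```

Each rounding alone looks small, but the first truncation feeds the second, which is the "2 precision loss factors" point made above.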
Hence, I'm returning to my previous decision to reject the escalation, increase severity to high and leave it solo, because it combines 2 precision loss factors resulting in a more major issue than \#72 which talks only about utilisation. Planning to apply this decision in a couple of hours. **aslanbekaibimov** -@WangSecurity - -> The duplication rules assume we have a "target issue", and the "potential duplicate" of that issue needs to meet the following requirements to be considered a duplicate. - -> Identify the root cause - Identify at least a Medium impact - Identify a valid attack path or vulnerability path - Fulfills other submission quality requirements (e.g. provides a PoC for categories that require one) - -Don't #72 and #60 satisfy all 4 requirements? +@WangSecurity + +> The duplication rules assume we have a "target issue", and the "potential duplicate" of that issue needs to meet the following requirements to be considered a duplicate. + +> Identify the root cause + Identify at least a Medium impact + Identify a valid attack path or vulnerability path + Fulfills other submission quality requirements (e.g. provides a PoC for categories that require one) + +Don't \#72 and \#60 satisfy all 4 requirements? **WangSecurity** -Good question, but #72 and #60 identified only one source of precision loss, so the following should apply: +Good question, but \#72 and \#60 identified only one source of precision loss, so the following should apply: > The exception to this would be if underlying code implementations OR impact OR the fixes are different, then they may be treated separately. @@ -1125,48 +1124,48 @@ That's the reason I think they should be treated separately. The decision remain **rickkk137** ->But here the report combines 2 precision loss factors resulting in a major precision loss -``` -def utilization(reserves: uint256, interest: uint256) -> uint256: - """ - Reserve utilization in percent (rounded down). @audit this is actually rounded up... 
- """ - @>>> return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100)) - -@external -@pure -def utilization2(reserves: uint256, interest: uint256) -> uint256: - """ - Reserve utilization in percent (rounded down). @audit this is actually rounded up... - """ - @>>> return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100_000)) - - - -function testCombined() public { - // now let's see what would happen if we raised the precision of both fees and percents - uint max_fee = 65; - uint max_fee2 = 65_000; // 3 extra digits of precision lowers error by 3 orders of magnitude - - uint256 reserves = 10_000_000 ether; - uint256 interest = 199_999 ether; // interest & reserves same in both, only differ in precision. - - uint256 util1 = denom.utilization(reserves, interest); - @>>> uint256 util2 = denom.utilization2(reserves, interest); // 3 extra digits of precision here also - - // borrow rate - uint fee1 = denom.scale(max_fee, util1); - @>>> uint fee2 = denom.scale2(max_fee2, util2); - - assertEq(fee1 * 1_000, fee2 - 999); // fee 1 is 1.000, fee 2 is 1.999 (~50% error) - } -``` -the watson passed util2 to denom.scale2 function in his PoC and that made huge difference and utilization2 function is custom function has written by watson - - - - - +>But here the report combines 2 precision loss factors resulting in a major precision loss +``` +def utilization(reserves: uint256, interest: uint256) -> uint256: + """ + Reserve utilization in percent (rounded down). @audit this is actually rounded up... + """ + @>>> return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100)) + +@external +@pure +def utilization2(reserves: uint256, interest: uint256) -> uint256: + """ + Reserve utilization in percent (rounded down). @audit this is actually rounded up... 
+ """ + @>>> return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100\_000)) + + + +function testCombined() public { + // now let's see what would happen if we raised the precision of both fees and percents + uint max\_fee = 65; + uint max\_fee2 = 65\_000; // 3 extra digits of precision lowers error by 3 orders of magnitude + + uint256 reserves = 10\_000\_000 ether; + uint256 interest = 199\_999 ether; // interest & reserves same in both, only differ in precision. + + uint256 util1 = denom.utilization(reserves, interest); + @>>> uint256 util2 = denom.utilization2(reserves, interest); // 3 extra digits of precision here also + + // borrow rate + uint fee1 = denom.scale(max\_fee, util1); + @>>> uint fee2 = denom.scale2(max\_fee2, util2); + + assertEq(fee1 * 1\_000, fee2 - 999); // fee 1 is 1.000, fee 2 is 1.999 (~50\% error) + } +``` +the watson passed util2 to denom.scale2 function in his PoC and that made huge difference and utilization2 function is custom function has written by watson + + + + + **msheikhattari** @@ -1189,27 +1188,27 @@ When combined, not only is the precision loss more severe but also more likely t **WangSecurity** -But as I see in #126, the precision loss from Denom is not that severe, is it wrong? +But as I see in \#126, the precision loss from Denom is not that severe, is it wrong? **msheikhattari** -That issue is slightly different, but what was pointed out here is that the most granular annual fee representable is about 1.6% - these are the intervals for fee rates as well (ex. 2x 1.6, 3x 1.6...) +That issue is slightly different, but what was pointed out here is that the most granular annual fee representable is about 1.6\% - these are the intervals for fee rates as well (ex. 2x 1.6, 3x 1.6...) 
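The ~1.6% granularity figure can be reproduced directly (a sketch; `DENOM = 1e9` and the yearly block count are the values used in the report, the variable names are illustrative):

```python
# The per-block fee rate is an integer numerator divided by DENOM, so annual
# fees are only representable in steps of BLOCKS_PER_YEAR / DENOM.

DENOM = 1_000_000_000          # 1e9, a hardcoded constant in the contracts
BLOCKS_PER_YEAR = 15_778_800   # block count used in the report's PoC

step = BLOCKS_PER_YEAR / DENOM   # smallest non-zero annual fee increment
print(f"{step:.3%}")             # ~1.578%: annual rates come in these steps

# each extra digit of DENOM precision shrinks the step by 10x:
print(f"{BLOCKS_PER_YEAR / (DENOM * 1000):.6%}")
```

This is why the report frames the fix as adding digits of precision to `DENOM` rather than tuning any updatable parameter.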
-Utilization on the other hand experiences precision loss of 1% in the extreme case (ex 14.9% -> 14%) +Utilization on the other hand experiences precision loss of 1\% in the extreme case (ex 14.9\% -> 14\%) So in absolute terms the issue arising from DENOM is more significant, when combined these issues become far more significant than implied by their nominal values, not only due to multiplied loss of precision but increased likelihood of loss (if precision loss from one source bumps it just over the boundary of another, as outlined in the PoC) **WangSecurity** -Yeah, I see. #126 tries to show a scenario where DENOM precision loss would round down the fees to 0, and for that to happen, the fees or collateral have to be very small, which results in a very small loss. But this issue just shows the precision loss from DENOM and doesn't try to show rounding down to 0. That's the key difference between the two reports. - -Hence, my decision remains that this will remain solo with high severity as expressed above. Planning to reject the escalation. The decision will be applied tomorrow at 10 am UTC: -> *Note: #126 won't be duplicated with this report as it doesn't show Medium impact* +Yeah, I see. \#126 tries to show a scenario where DENOM precision loss would round down the fees to 0, and for that to happen, the fees or collateral have to be very small, which results in a very small loss. But this issue just shows the precision loss from DENOM and doesn't try to show rounding down to 0. That's the key difference between the two reports. + +Hence, my decision remains that this will remain solo with high severity as expressed above. Planning to reject the escalation. The decision will be applied tomorrow at 10 am UTC: +> *Note: \#126 won't be duplicated with this report as it doesn't show Medium impact* **WangSecurity** -Result: -High +Result: +High Unique **sherlock-admin2** @@ -1217,7 +1216,15 @@ Unique Escalations have been resolved successfully! 
Escalation status: -- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/66/#issuecomment-2345173630): rejected +- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/66/\#issuecomment-2345173630): rejected + +**sherlock-admin2** + +The protocol team fixed this issue in the following PRs/commits: +https://github.com/Velar-co/gl-sherlock/pull/3 + + + # Issue H-2: User can sandwich their own position close to get back all of their position fees @@ -1225,6 +1232,7 @@ Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/94 ## Found by 0x37, KupiaSec, bughuntoor + ## Summary User can sandwich their own position close to get back all of their position fees @@ -1251,20 +1259,18 @@ Manual Review ## Recommendation Implement a system where fees are gradually distributed to LP providers. - - ## Discussion **KupiaSecAdmin** -Escalate - +Escalate + This is invalid, `FEES.update` function is called after every action, so when the liquidity of flashloan is added, the accrued fees are distributed to prev LP providers. **sherlock-admin3** -> Escalate -> +> Escalate +> > This is invalid, `FEES.update` function is called after every action, so when the liquidity of flashloan is added, the accrued fees are distributed to prev LP providers. You've created a valid escalation! @@ -1275,51 +1281,51 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **spacegliderrrr** -Comment above is simply incorrect. `Fees.update` doesn't actually make a change on the LP token value - it simply calculates the new fee terms and makes a snapshot at current block. - +Comment above is simply incorrect. `Fees.update` doesn't actually make a change on the LP token value - it simply calculates the new fee terms and makes a snapshot at current block. + The value of LP token is based on the pool reserves. 
Fees are actually distributed to the pool only upon closing the position, hence why the attack described is possible.

**rickkk137**

-invalid
-it is intended design
-```python
-def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
-  if ts == 0: return mv
-  else      : return (mv * ts) / pv
-```
-```python
-def g(lp: uint256, ts: uint256, pv: uint256) -> uint256:
-  return (lp * pv) / ts
-```
-Pool contract uses above formola to compute amount of LP token to mint and also burn value for example:
-pool VEL-STX:
-VEL reserve = 1000
-STX reserve = 1000
-VEL/STX price = $1
-LP total_supply = 1000
-**pv stand for pool value and ts stand for total_supply and mv stand for mint value**
-LP_price = ts / pv = 1000 / 1000 = $1
-its mean if user deposit $1000 into pool in result get 1000 LP token
-and for burn
-burn_value = lp_amount * pool_value / total_supply and based on above example
-total_supply=2000
-pool_value=2000
-because user deposit $1000 into pool and mint 1000lp token
-
-the issue want to say mailicious user with increase lp price can retrieve borrowing_fee ,but when mailicious users and other users closes their profitable positions pool_value will be decreased[the protocol pay users' profit from pool reserve],hence burn_value become less than usual state
-
-requirment internal state for attack path:
-
-- mailicious user's position be in loss[pool value will be decreased because user collateral will be added to pool reserve]
-- other user don't close their profitable positions in that block
-- flashloan's cost be low
-
+Invalid.
+It is the intended design.
+```python
+def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
+  if ts == 0: return mv
+  else      : return (mv * ts) / pv
+```
+```python
+def g(lp: uint256, ts: uint256, pv: uint256) -> uint256:
+  return (lp * pv) / ts
+```
+The Pool contract uses the above formulas to compute the amount of LP tokens to mint and the burn value. For example:
+pool VEL-STX:
+VEL reserve = 1000
+STX reserve = 1000
+VEL/STX price = \$1
+LP total\_supply 
= 1000
+**pv stands for pool value, ts for total\_supply, and mv for mint value**
+LP\_price = pv / ts = 1000 / 1000 = \$1
+this means a user who deposits \$1000 into the pool receives 1000 LP tokens
+and for burn:
+burn\_value = lp\_amount * pool\_value / total\_supply; based on the above example,
+total\_supply = 2000
+pool\_value = 2000
+because the user deposited \$1000 into the pool and minted 1000 LP tokens
+
+the issue claims a malicious user can capture borrowing\_fee by increasing the LP price, but when the malicious user and other users close their profitable positions, pool\_value decreases [the protocol pays users' profits from the pool reserve], so burn\_value becomes smaller than in the usual state
+
+required internal state for the attack path:
+
+- the malicious user's position is at a loss [pool value increases because the user's collateral is added to the pool reserve]
+- other users do not close their profitable positions in that block
+- the flashloan's cost is low
+

**WangSecurity**

-I see how it can be viewed as the intended design, but still, the user can bypass the fees essentially and LPs lose them, when they should've been to receive it. Am I missing anything here?
+I see how it can be viewed as the intended design, but still, the user can essentially bypass the fees, and the LPs lose fees they should have received. Am I missing anything here?

**rickkk137**

@@ -1332,12 +1338,12 @@

attack path is possible when user's position is in lose and loss amount has to be enough to increase the LP price , note that if loss amount be large enough the position can be eligible for liquidation and the user has to close his position before liquidation bot

**WangSecurity**

-> attack path is possible when user's position is in lose and loss amount has to be enough to increase the LP price , note that if loss amount be large enough the position can be eligible for liquidation and the user has to close his position before liquidation bot
-
-So these are the constraints:
-1. Loss should be relatively large so it increases the LP price.
-2. 
The attacker has to perform the attack before the liquidation bot liquidates their position which is also complicated by the private mempool on Bob, so we cannot see the transaction of the liquidation bot. - +> attack path is possible when user's position is in lose and loss amount has to be enough to increase the LP price , note that if loss amount be large enough the position can be eligible for liquidation and the user has to close his position before liquidation bot + +So these are the constraints: +1. Loss should be relatively large so it increases the LP price. +2. The attacker has to perform the attack before the liquidation bot liquidates their position which is also complicated by the private mempool on Bob, so we cannot see the transaction of the liquidation bot. + Are the above points correct and is there anything missing? **deadrosesxyz** @@ -1346,14 +1352,14 @@ No, loss amount does not need to be large. Attack can be performed on any non-pr **WangSecurity** -In that case, I agree that it's an issue that the user can indeed bypass the fees and prevent the LPs from receiving it. Also, I don't see the pre-condition of the position being non-profitable as an extensive limitation. Moreover, since it can be any non-profitable position, then there is also no requirement for executing the attack before the liquidation bot (unless the position can be liquidated as soon as it's non-profitable). - +In that case, I agree that it's an issue that the user can indeed bypass the fees and prevent the LPs from receiving it. Also, I don't see the pre-condition of the position being non-profitable as an extensive limitation. Moreover, since it can be any non-profitable position, then there is also no requirement for executing the attack before the liquidation bot (unless the position can be liquidated as soon as it's non-profitable). + Thus, I see that it should be high severity. 
If something is missing or these limitations are extensive in the context of the protocol, please let me know. Planning to reject the escalation, but increase severity to high. **WangSecurity** -Result: -High +Result: +High Has duplicates **sherlock-admin4** @@ -1361,14 +1367,25 @@ Has duplicates Escalations have been resolved successfully! Escalation status: -- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/94/#issuecomment-2344292890): rejected +- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/94/\#issuecomment-2344292890): rejected + +**sherlock-admin2** + +The protocol team fixed this issue in the following PRs/commits: +https://github.com/Velar-co/gl-sherlock/pull/1 + + + # Issue M-1: LPs will withdraw more value than deposited during pegged token de-peg events Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/52 +The protocol has acknowledged this issue. + ## Found by 4gontuk, KupiaSec + ### Summary The [`CONTEXT` function in `gl-sherlock/contracts/api.vy`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/api.vy#L52-L71) uses the `/USD` price for valuation, assuming a 1:1 peg between the quote token and USD. This assumption can fail during de-peg events, leading to incorrect valuations and potential exploitation. @@ -1596,20 +1613,18 @@ def CONTEXT( ### Mitigation To mitigate this issue, the protocol should use the `/` price directly if available, or derive it from the `/USD` and `/USD` prices. This ensures accurate valuations even if the quote token de-pegs from USD. - - ## Discussion **mePopye** -Escalate - +Escalate + On behalf of the watson **sherlock-admin3** -> Escalate -> +> Escalate +> > On behalf of the watson You've created a valid escalation! 
@@ -1620,21 +1635,21 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **WangSecurity** -After additionally considering this issue, here's my understanding. Let's assume a scenario of 30% depeg and USDT = 0.7 USD. -1. The pool has 10 BTC and 500k USDT. -2. User deposits 1 BTC and 50k USDT, assuming 1 BTC = 50k USDT = 50k USD. -3. USDT depegs to 0.7 USD, i.e. 1 USDT = 0.7 USD. Then BTC = 50k USD = ~71.5k USDT. -4. The user withdraws 1 BTC and 50k USDT. They receive it because the code still considers 1 USDT = 1 USD. There are 10 BTC and 500k USDT left in the contracts. -5. The protocol didn't lose any funds, the amount of BTC and USDT remained the same as it was before the depeg. -6. But, in reality, the user has withdrawn 50k worth of BTC and 35k worth of USDT since 1 USDT = 0.7 USD. -7. Hence, if the protocol accounted for the depeg, there had to be 10 BTC and 515k USDT left in the contract after the user had withdrawn during the depeg. - -Hence, even though it's not a direct loss of funds but a loss in value, this should be a valid medium (considering depeg as an extensive limitation). Thus, planning to accept the escalation and validate with medium severity. The duplicate is #113, are there any additional duplicates? +After additionally considering this issue, here's my understanding. Let's assume a scenario of 30\% depeg and USDT = 0.7 USD. +1. The pool has 10 BTC and 500k USDT. +2. User deposits 1 BTC and 50k USDT, assuming 1 BTC = 50k USDT = 50k USD. +3. USDT depegs to 0.7 USD, i.e. 1 USDT = 0.7 USD. Then BTC = 50k USD = ~71.5k USDT. +4. The user withdraws 1 BTC and 50k USDT. They receive it because the code still considers 1 USDT = 1 USD. There are 10 BTC and 500k USDT left in the contracts. +5. The protocol didn't lose any funds, the amount of BTC and USDT remained the same as it was before the depeg. +6. But, in reality, the user has withdrawn 50k worth of BTC and 35k worth of USDT since 1 USDT = 0.7 USD. +7. 
Hence, if the protocol accounted for the depeg, there had to be 10 BTC and 515k USDT left in the contract after the user had withdrawn during the depeg. + +Hence, even though it's not a direct loss of funds but a loss in value, this should be a valid medium (considering depeg as an extensive limitation). Thus, planning to accept the escalation and validate with medium severity. The duplicate is \#113, are there any additional duplicates? **WangSecurity** -Result: -Medium +Result: +Medium Has duplicates **sherlock-admin2** @@ -1642,14 +1657,19 @@ Has duplicates Escalations have been resolved successfully! Escalation status: -- [mePopye](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/52/#issuecomment-2345299625): accepted +- [mePopye](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/52/\#issuecomment-2345299625): accepted + + # Issue M-2: Funding fee will be zero because of precision loss Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/72 +The protocol has acknowledged this issue. 
+ ## Found by aslanbek, pashap9990 + ### Summary funding fee in some cases will be zero because of precision loss @@ -1751,47 +1771,45 @@ def test_precision_loss(setup, open,VEL, STX, long, positions, pools): 1-scale up long_utilzation and short_utilzation 2-set min value for long_utilzation and short_utilzation - - ## Discussion **sherlock-admin3** -> Escalate -> -> Let’s assume -> interest = 1_099_999e18 -> reserve=10_000_000e18 -> max_fee = 100 -> long_utilization = interest / reserve / 100 = 1_099_999e18 / 10_000_000e18 / 100 = 10.99 ~ 10 //round down -> borrowing_fee = max_fee * long_utillization / 100 = 100 * 10.99 / 100 = 10.99 -> after one year -> -> -> //result without precision loss -> block_per_year = 15_778_800 -> funding_fee_sum = block_per_year * funding_fee = 15778800 * 10.99 = 173,409,012 -> borrowing_long_sum = block_per_year * borrowing_fee = 15778800 * 10.99 = 173,409,012 -> -> borrowing_paid = collateral * borrowing_long_sum / DENOM = 1_099_999e18 * 173,409,012 / 1e9 = 190,749e18 -> -> funding_paid = collateral * funding_fee_sum / DENOM = 190,749e18 -> -> //result with precision loss -> block_per_year = 15_778_800 -> funding_fee_sum = block_per_year * funding_fee = 15778800 * 10 = 157788000 -> borrowing_long_sum = block_per_year * borrowing_fee = 15778800 * 10 = 157788000 -> -> borrowing_paid = collateral * borrowing_long_sum / DENOM = 1_099_999e18 * 157788000 / 1e9 = 173,566e18 -> -> funding_paid = collateral * funding_fee_sum / DENOM = 173,566e18 -> -> result:1% difference exists in result -> -> -> -> -> +> Escalate +> +> Let’s assume +> interest = 1\_099\_999e18 +> reserve=10\_000\_000e18 +> max\_fee = 100 +> long\_utilization = interest / reserve / 100 = 1\_099\_999e18 / 10\_000\_000e18 / 100 = 10.99 ~ 10 //round down +> borrowing\_fee = max\_fee * long\_utillization / 100 = 100 * 10.99 / 100 = 10.99 +> after one year +> +> +> //result without precision loss +> block\_per\_year = 15\_778\_800 +> funding\_fee\_sum = block\_per\_year * 
funding\_fee = 15778800 * 10.99 = 173,409,012 +> borrowing\_long\_sum = block\_per\_year * borrowing\_fee = 15778800 * 10.99 = 173,409,012 +> +> borrowing\_paid = collateral * borrowing\_long\_sum / DENOM = 1\_099\_999e18 * 173,409,012 / 1e9 = 190,749e18 +> +> funding\_paid = collateral * funding\_fee\_sum / DENOM = 190,749e18 +> +> //result with precision loss +> block\_per\_year = 15\_778\_800 +> funding\_fee\_sum = block\_per\_year * funding\_fee = 15778800 * 10 = 157788000 +> borrowing\_long\_sum = block\_per\_year * borrowing\_fee = 15778800 * 10 = 157788000 +> +> borrowing\_paid = collateral * borrowing\_long\_sum / DENOM = 1\_099\_999e18 * 157788000 / 1e9 = 173,566e18 +> +> funding\_paid = collateral * funding\_fee\_sum / DENOM = 173,566e18 +> +> result:1\% difference exists in result +> +> +> +> +> > You've created a valid escalation! @@ -1802,61 +1820,61 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **WangSecurity** -To clarify, due to this precision loss, there will be a 1% loss of the funding fee or am I missing something? +To clarify, due to this precision loss, there will be a 1\% loss of the funding fee or am I missing something? 
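For reference, the escalation's arithmetic can be reproduced with plain integer math. This is a sketch with illustrative names, not the actual Vyper source; utilization is computed in whole percent, so the fractional part is floored away at each step:

```python
# Sketch of the rounding behavior discussed above. Names are illustrative;
# input values come from the escalation comment (6-decimal tokens).

def utilization_pct(interest: int, reserve: int) -> int:
    # whole-percent utilization: interest / (reserve / 100), floored
    return interest // (reserve // 100)

def apply_pct(fee: int, pct: int) -> int:
    return fee * pct // 100

OPEN_INTEREST = 1_099_999 * 10**6
RESERVE = 10_000_000 * 10**6
MAX_FEE = 100

util = utilization_pct(OPEN_INTEREST, RESERVE)   # 10, not 10.99999
borrowing_fee = apply_pct(MAX_FEE, util)         # 10
funding_fee = apply_pct(borrowing_fee, util)     # 1, not ~1.21

# Unfloored values for comparison
exact_util = OPEN_INTEREST * 100 / RESERVE                     # 10.99999
exact_funding = MAX_FEE * exact_util / 100 * exact_util / 100  # ~1.21
print(f"funding fee understated by {1 - funding_fee / exact_funding:.0%}")
```

The floored funding fee (1 vs ~1.21) is what produces the roughly 16% underpayment computed later in the thread.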
**rickkk137** - - -open_interest = 1,099,999e6 -reserve = 10,000,000e6 - -long_util = open_interest / reserve / 100 = 10.99 -borrowing_fee = max_fee * long_util / 100 = 100 * 10.99 / 100 = 10.99 -funding_fee = borrowing_fee * long_util / 100 = 10.99 * 10.99 / 100 = 1.20 - - -lets assume there is a long position with 10000e6 colleral and user want to close his position after a year[15778800 blocks per year] - -**result without precision loss** - - -borrowing_paid = collateral * borrowing_sum / DENOM - -borrowing_paid = 10,000e6 * 15778800 * 10.99 / 1e9 = 1,734,090,120[its mean user has to pay $1734 as borrowing fee] -funding_paid = collateral * funding_sum / DENOM = 10,000e6 * 1.20 * 15778800 / 1e9 = 189,345,600[its mean user has to pay $189 as funding fee] - -**result with precision loss** - -borrowing_paid = collateral * borrowing_sum / DENOM -borrowing_paid = 10,000e6 * 15778800 * 10 / 1e9 = 1,577,880,000[its mean user has to pay $1,577 as borrowing fee] -funding_paid = collateral * funding_sum / DENOM = 10,000e6 * 1 * 15778800 / 1e9 = 157,788,000[its mean user has to pay $157 as funding fee] - - -LPs loss = $157[~1%] -user pay $32 less than expected [32 * 100 / 189 ~ 16%] - - - - - - - + + +open\_interest = 1,099,999e6 +reserve = 10,000,000e6 + +long\_util = open\_interest / reserve / 100 = 10.99 +borrowing\_fee = max\_fee * long\_util / 100 = 100 * 10.99 / 100 = 10.99 +funding\_fee = borrowing\_fee * long\_util / 100 = 10.99 * 10.99 / 100 = 1.20 + + +lets assume there is a long position with 10000e6 colleral and user want to close his position after a year[15778800 blocks per year] + +**result without precision loss** + + +borrowing\_paid = collateral * borrowing\_sum / DENOM + +borrowing\_paid = 10,000e6 * 15778800 * 10.99 / 1e9 = 1,734,090,120[its mean user has to pay \$1734 as borrowing fee] +funding\_paid = collateral * funding\_sum / DENOM = 10,000e6 * 1.20 * 15778800 / 1e9 = 189,345,600[its mean user has to pay \$189 as funding fee] + +**result with 
precision loss** + +borrowing\_paid = collateral * borrowing\_sum / DENOM +borrowing\_paid = 10,000e6 * 15778800 * 10 / 1e9 = 1,577,880,000[its mean user has to pay \$1,577 as borrowing fee] +funding\_paid = collateral * funding\_sum / DENOM = 10,000e6 * 1 * 15778800 / 1e9 = 157,788,000[its mean user has to pay \$157 as funding fee] + + +LPs loss = \$157[~1\%] +user pay \$32 less than expected [32 * 100 / 189 ~ 16\%] + + + + + + + **rickkk137** -#60 dup of this issue +\#60 dup of this issue **WangSecurity** -I agree that this issue is correct and indeed identifies the precision loss showcasing the 1% loss. Planning to accept the escalation and validate with medium severity. +I agree that this issue is correct and indeed identifies the precision loss showcasing the 1\% loss. Planning to accept the escalation and validate with medium severity. **WangSecurity** -Result: -Medium +Result: +Medium Has duplicates **sherlock-admin3** @@ -1864,14 +1882,19 @@ Has duplicates Escalations have been resolved successfully! Escalation status: -- [rickkk137](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/72/#issuecomment-2348564086): accepted +- [rickkk137](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/72/\#issuecomment-2348564086): accepted + + # Issue M-3: LPs cannot specify min amount received in burn function, causing loss of fund for them Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/74 +The protocol has acknowledged this issue. + ## Found by PASCAL, pashap9990 + ### Summary LPs cannot set minimum base or quote amounts when burning LP tokens, leading to potential losses due to price fluctuations during transactions. 
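The missing check can be sketched as follows. This is a hypothetical Python model of the Vyper `burn` (names are illustrative); `min_value` stands in for the `min_assets_value` parameter the recommendation proposes, and `burn_value` follows the `lp_amt * pool_reserve / total_supply` formula quoted in the discussion:

```python
# Illustrative model of the recommended slippage guard; not the actual
# contract code.

class SlippageError(Exception):
    pass

def burn_value(lp_amt: int, pool_reserve: int, total_supply: int) -> int:
    # value an LP receives for burning lp_amt tokens
    return lp_amt * pool_reserve // total_supply

def burn(lp_amt: int, pool_reserve: int, total_supply: int,
         min_value: int) -> int:
    value = burn_value(lp_amt, pool_reserve, total_supply)
    if value < min_value:  # the check the current burn() lacks
        raise SlippageError(f"received {value} < min {min_value}")
    return value

# LP expects 1000, but a profitable close shrinks the reserve to 900 first:
assert burn(1000, 1000, 1000, min_value=950) == 1000
try:
    burn(1000, 900, 1000, min_value=950)  # front-run case: now reverts
except SlippageError:
    pass
```

With the guard, the front-run burn reverts instead of silently paying out 900, letting the LP retry at an acceptable price.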
@@ -1952,22 +1975,20 @@ def test_lost_assets(setup, VEL, STX, lp_provider, LP, pools, math, open, long, Consider adding min_base_amount and min_quote_amount to the burn function's params or adding min_assets_value for example when the price is $2 LPs set this param to $14800, its mean received value worse has to be greater than $14800 - - ## Discussion **rickkk137** -Escalate -LP token price directly compute based pool reserve and total supply lp token and the issue clearly states received amount can be less than expected amount and in Coded PoC liquidity provider expected $15000 but in result get $12500 - -loss = $2500[1.6%] ->Causes a loss of funds but requires certain external conditions or specific states, or a loss is highly constrained. The loss of the affected party must exceed 0.01% and 10 USD +Escalate +LP token price directly compute based pool reserve and total supply lp token and the issue clearly states received amount can be less than expected amount and in Coded PoC liquidity provider expected \$15000 but in result get \$12500 + +loss = \$2500[1.6\%] +>Causes a loss of funds but requires certain external conditions or specific states, or a loss is highly constrained. The loss of the affected party must exceed 0.01\% and 10 USD **sherlock-admin3** -> Escalate +> Escalate > >Slippage related issues showing a definite loss of funds with a detailed explanation for the same can be considered valid high You've created a valid escalation! @@ -1982,16 +2003,16 @@ However, the LPs can add input slippage parameters, i.e. 
`desired` and `slippage **rickkk137** -@WangSecurity `desired` and `slippage` just has been used to [control price which fetch from oracle](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/oracle.vy#L122) and protocol uses of that for converting quote to base or base to quote to compute total pool's reserve in terms of quote token but there is 2 tips here -- burn value = [lp_amt * pool_reserve / total_supply_lp](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/pools.vy#L215C28-L215C29) -- [pool_reserve will be decreased when users closes their profitable positions](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L203) - -let's examine an example with together: -u have 1000 lp token and u want convert them to usd and pool_reserve and total_supply_lp is 1000 in our example, -burn_value = lp_amt * pool_reserve / total_supply = 1000 * 1000 / 1000 = 1000 usd -based on above value u send your transaction to network but a close profitable transaction will be executed before your transaction and get $100 as payout,its mean pool reserve is 900 -burn_value = lp_amt * pool_reserve / total_supply = 1000 * 900 / 1000 = 900 usd -u get $900 instead of $1000 and u cannot control this situation as a user +@WangSecurity `desired` and `slippage` just has been used to [control price which fetch from oracle](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/oracle.vy\#L122) and protocol uses of that for converting quote to base or base to quote to compute total pool's reserve in terms of quote token but there is 2 tips here +- burn value = [lp\_amt * pool\_reserve / 
total\_supply\_lp](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/pools.vy\#L215C28-L215C29) +- [pool\_reserve will be decreased when users closes their profitable positions](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy\#L203) + +let's examine an example with together: +u have 1000 lp token and u want convert them to usd and pool\_reserve and total\_supply\_lp is 1000 in our example, +burn\_value = lp\_amt * pool\_reserve / total\_supply = 1000 * 1000 / 1000 = 1000 usd +based on above value u send your transaction to network but a close profitable transaction will be executed before your transaction and get \$100 as payout,its mean pool reserve is 900 +burn\_value = lp\_amt * pool\_reserve / total\_supply = 1000 * 900 / 1000 = 900 usd +u get \$900 instead of \$1000 and u cannot control this situation as a user **WangSecurity** @@ -1999,8 +2020,8 @@ Yeah, I see, thank you. Indeed, the situation which would cause an issue to the **WangSecurity** -Result: -Medium +Result: +Medium Has duplicates **sherlock-admin4** @@ -2008,14 +2029,19 @@ Has duplicates Escalations have been resolved successfully! Escalation status: -- [rickkk137](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/74/#issuecomment-2348583408): accepted +- [rickkk137](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/74/\#issuecomment-2348583408): accepted + + # Issue M-4: Invalid Redstone oracle payload size prevents the protocol from working properly Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/75 +The protocol has acknowledged this issue. + ## Found by KupiaSec + ## Summary In api contract, it uses 224 bytes as maximum length for Redstone's oracle payload, but oracle price data and signatures of 3 signers exceeds 225 bytes thus reverting transactions. 
@@ -2046,20 +2072,18 @@ Manual Review ## Recommendation The upperbound size of payload array should be increased to satisfy Redstone oracle payload size. - - ## Discussion **KupiaSecAdmin** -Escalate - +Escalate + Information required for this issue to be rejected. **sherlock-admin3** -> Escalate -> +> Escalate +> > Information required for this issue to be rejected. You've created a valid escalation! @@ -2074,87 +2098,87 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **KupiaSecAdmin** -@WangSecurity - The data size is fetched from this [transaction](https://snowtrace.io/tx/0x4a3b8cb8a5287f3da1d0026ea35a968b1f93e2c24bad3e587cef450b364e69ec?chainid=43114) which uses RedStone oracle. -The calldata size of the transaction is 9574 bytes, but the function receives 2 variables which sums up 64 bytes, so remaining bytes `9574 - 64 = 9510` bytes of data is for RedStone oracle payload. - +@WangSecurity - The data size is fetched from this [transaction](https://snowtrace.io/tx/0x4a3b8cb8a5287f3da1d0026ea35a968b1f93e2c24bad3e587cef450b364e69ec?chainid=43114) which uses RedStone oracle. +The calldata size of the transaction is 9574 bytes, but the function receives 2 variables which sums up 64 bytes, so remaining bytes `9574 - 64 = 9510` bytes of data is for RedStone oracle payload. + Maybe, it includes some additional oracle data, but the main point is that oracle price data of 3 oracles can't fit in 224 bytes. Because each oracle signature is 65 bytes which means it has 195 bytes of signature, and also the oracle data should include price, timestamp, and token name basically. So it never fits in 224 bytes. 
**rickkk137** -i used [minimal foundry](https://github.com/redstone-finance/minimal-foundry-repo) package to generate some payload -``` -bytes memory redstonePayload0 = getRedstonePayload("BTC:120:8,ETH:69:8"); -//result length 440 chars -425443000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002cb4178004554480000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000019b45a50001921f3ed12d000000200000022af29790252e02178ff033567c55b4145e48cbec0434729f363a01675d9a88d618583b722a092e3e6d9944b69c79cadf793b2950f1a99fd595933609984ef1c01c0001000000000002ed57011e0000 - bytes memory redstonePayload = getRedstonePayload("BTC:120:8"); -//result length 312 chars -425443000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002cb41780001921f3ed1d400000020000001e3cabe40b42498dccea014557946f3bae4d29783c9dc7deb499c5a2d6d1901412cba9924d1c1fdfc429c07361ed9fab2789e191791a31ecdbbd77ab95a373f491c0001000000000002ed57011e0000 - -def mint( - base_token : address, #ERC20 - quote_token : address, #ERC20 - lp_token : address, #ERC20Plus - base_amt : uint256, - quote_amt : uint256, - desired : uint256, - slippage : uint256, - @>>> payload : Bytes[224] -``` -payload parameter's length is 224 its mean we can pass a string with max length 448 character ,hence we can pass above payload to api functions,I want to say length of payload depend on number of symbols which we pass to get payload and max size for 2 symbol is 440 character +i used [minimal foundry](https://github.com/redstone-finance/minimal-foundry-repo) package to generate some payload +``` +bytes memory redstonePayload0 = getRedstonePayload("BTC:120:8,ETH:69:8"); +//result length 440 chars 
+425443000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002cb4178004554480000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000019b45a50001921f3ed12d000000200000022af29790252e02178ff033567c55b4145e48cbec0434729f363a01675d9a88d618583b722a092e3e6d9944b69c79cadf793b2950f1a99fd595933609984ef1c01c0001000000000002ed57011e0000 + bytes memory redstonePayload = getRedstonePayload("BTC:120:8"); +//result length 312 chars +425443000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002cb41780001921f3ed1d400000020000001e3cabe40b42498dccea014557946f3bae4d29783c9dc7deb499c5a2d6d1901412cba9924d1c1fdfc429c07361ed9fab2789e191791a31ecdbbd77ab95a373f491c0001000000000002ed57011e0000 + +def mint( + base\_token : address, \#ERC20 + quote\_token : address, \#ERC20 + lp\_token : address, \#ERC20Plus + base\_amt : uint256, + quote\_amt : uint256, + desired : uint256, + slippage : uint256, + @>>> payload : Bytes[224] +``` +payload parameter's length is 224 its mean we can pass a string with max length 448 character ,hence we can pass above payload to api functions,I want to say length of payload depend on number of symbols which we pass to get payload and max size for 2 symbol is 440 character **KupiaSecAdmin** -@rickkk137 - Bytes type is different from strings as documented [here](https://docs.vyperlang.org/en/stable/types.html#byte-arrays), Bytes[224] means 224 bytes. +@rickkk137 - Bytes type is different from strings as documented [here](https://docs.vyperlang.org/en/stable/types.html\#byte-arrays), Bytes[224] means 224 bytes. 
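One point of reference for the exchange that follows: hex-string length converts to byte length by halving, since each pair of hex digits encodes one byte. A quick check using the example payload lengths quoted in this thread:

```python
# Byte-length sanity check for the minimal-foundry example payloads.
ONE_FEED_HEX_CHARS = 312   # "BTC:120:8" payload (one signer)
TWO_FEED_HEX_CHARS = 440   # "BTC:120:8,ETH:69:8" payload (one signer)

assert ONE_FEED_HEX_CHARS // 2 == 156  # 156 bytes: fits in Bytes[224]
assert TWO_FEED_HEX_CHARS // 2 == 220  # 220 bytes: fits, with 4 to spare
```

Note these figures are for a single signer's data; the multi-signer case is what the rest of the thread disputes.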
**rickkk137** -Each pair of hexadecimal digits represents one byte and in above examples first example's length is [440 / 2] 220 bytes and second one's length is [312/2] 156 bytes, further more in [redstoneExtractor contract](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/RedstoneExtractor.sol#L10) developer just uses index 0 which its mean requestRedstonePayload function just get one symbol as a parameter in Velar +Each pair of hexadecimal digits represents one byte and in above examples first example's length is [440 / 2] 220 bytes and second one's length is [312/2] 156 bytes, further more in [redstoneExtractor contract](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/RedstoneExtractor.sol\#L10) developer just uses index 0 which its mean requestRedstonePayload function just get one symbol as a parameter in Velar **KupiaSecAdmin** -@rickkk137 - Seems you misunderstand something here. -You generated Redstone oracle payload using their example repo that results in 220 bytes, this is the oracle data of one signer. For Verla, it requires oracle data of 3 signers so that it can take median price of 3 prices. - +@rickkk137 - Seems you misunderstand something here. +You generated Redstone oracle payload using their example repo that results in 220 bytes, this is the oracle data of one signer. For Verla, it requires oracle data of 3 signers so that it can take median price of 3 prices. + And I agree with the statement that it uses one symbol, yes Verla only requires BTC price from Redstone oracle. 
**rickkk137** -Data is packed into a message according to the following structure - ->The data signature is verified by checking if the signer is one of the approved providers - -its mean just one sign there is in every payload - -| Symbol | Value | Timestamp | Size(n) | Signature| -| -------- | -------- | -------- | -------- | -------- | -| 32b | 32b | 32b | 1b | 65b | - -max size for 1 symbol: 32 + 32 + 32 + 1 + 65 = 172 bytes -https://github.com/redstone-finance/redstone-evm-connector - -```solidity -function getAuthorisedSignerIndex( - address signerAddress - ) public view virtual override returns (uint8) { - if (signerAddress == 0x8BB8F32Df04c8b654987DAaeD53D6B6091e3B774) { - return 0; - } else if (signerAddress == 0xdEB22f54738d54976C4c0fe5ce6d408E40d88499) { - return 1; - } else if (signerAddress == 0x51Ce04Be4b3E32572C4Ec9135221d0691Ba7d202) { - return 2; - } else if (signerAddress == 0xDD682daEC5A90dD295d14DA4b0bec9281017b5bE) { - return 3; - } else if (signerAddress == 0x71d00abE308806A3bF66cE05CF205186B0059503) { - return 4; - } else { - revert SignerNotAuthorised(signerAddress); - } - } -``` - - - +Data is packed into a message according to the following structure + +>The data signature is verified by checking if the signer is one of the approved providers + +its mean just one sign there is in every payload + +| Symbol | Value | Timestamp | Size(n) | Signature| +| -------- | -------- | -------- | -------- | -------- | +| 32b | 32b | 32b | 1b | 65b | + +max size for 1 symbol: 32 + 32 + 32 + 1 + 65 = 172 bytes +https://github.com/redstone-finance/redstone-evm-connector + +```solidity +function getAuthorisedSignerIndex( + address signerAddress + ) public view virtual override returns (uint8) { + if (signerAddress == 0x8BB8F32Df04c8b654987DAaeD53D6B6091e3B774) { + return 0; + } else if (signerAddress == 0xdEB22f54738d54976C4c0fe5ce6d408E40d88499) { + return 1; + } else if (signerAddress == 0x51Ce04Be4b3E32572C4Ec9135221d0691Ba7d202) { + return 2; + } else 
if (signerAddress == 0xDD682daEC5A90dD295d14DA4b0bec9281017b5bE) { + return 3; + } else if (signerAddress == 0x71d00abE308806A3bF66cE05CF205186B0059503) { + return 4; + } else { + revert SignerNotAuthorised(signerAddress); + } + } +``` + + + **WangSecurity** @@ -2163,58 +2187,58 @@ function getAuthorisedSignerIndex( **KupiaSecAdmin** -@WangSecurity - It comes from the RedStone implementation that Verla uses: https://github.com/redstone-finance/redstone-oracles-monorepo/blob/2bbf16cbbaa36f7046034dbbd968f3673a0657e8/packages/evm-connector/contracts/data-services/PrimaryProdDataServiceConsumerBase.sol#L12-L14 - -And you know, usually, using one signer data as oracle causes issue because its data can be malicious, that's how the protocol takes 3 signers and take median price among them. +@WangSecurity - It comes from the RedStone implementation that Verla uses: https://github.com/redstone-finance/redstone-oracles-monorepo/blob/2bbf16cbbaa36f7046034dbbd968f3673a0657e8/packages/evm-connector/contracts/data-services/PrimaryProdDataServiceConsumerBase.sol\#L12-L14 + +And you know, usually, using one signer data as oracle causes issue because its data can be malicious, that's how the protocol takes 3 signers and take median price among them. **WangSecurity** -Unfortunately, I'm still not convinced enough this is actually a valid finding. Firstly, the transaction linked before, which has 9574 bytes size of calldata and uses the RedStone oracle, doesn't have the Redstone's payload as an input parameter as Velar does it. -Secondly, this transaction is on Avalanche, while Velar will be deployed on Bob. - -Hence, this is not a sufficient argument that 224 Bytes won't be enough. -Thirdly, `payload` is used when calling the [`extract_price`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/RedstoneExtractor.sol#L6) function which doesn't even use that payload. 
Hence, I don't see a sufficient argument for this being a medium, but before making the decision, I'm giving some time to correct my points. +Unfortunately, I'm still not convinced enough this is actually a valid finding. Firstly, the transaction linked before, which has 9574 bytes size of calldata and uses the RedStone oracle, doesn't have the Redstone's payload as an input parameter as Velar does it. +Secondly, this transaction is on Avalanche, while Velar will be deployed on Bob. + +Hence, this is not a sufficient argument that 224 Bytes won't be enough. +Thirdly, `payload` is used when calling the [`extract\_price`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/RedstoneExtractor.sol\#L6) function which doesn't even use that payload. Hence, I don't see a sufficient argument for this being a medium, but before making the decision, I'm giving some time to correct my points. **rickkk137** -@WangSecurity u can pass n asset to fetchPayload function and that isn't const its mean payload length is flexible which depend on protocol and when we look at [extract_price](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/RedstoneExtractor.sol#L10) which just uses index 0 its mean they are suppose to pass just one symbol to redstone function to get payload and base on [redstone document](https://github.com/redstone-finance/redstone-evm-connector) payload's length just for one symbol is 172 bytes which is less than 224 bytes - -payload_size = n*(32 + 32) + 32 + 1 + 65 -payload_size_for_one_asset = 1 * (32 + 32) + 32 + 1 + 65 = 172 bytes -payload_size_for_two_asset = 2 * (32 + 32) + 32 + 1 + 65 = 226 bytes -payload_size_for_three_asset = 3 * (32 + 32) + 32 + 1 + 65 = 290 bytes -... 
- +@WangSecurity u can pass n asset to fetchPayload function and that isn't const its mean payload length is flexible which depend on protocol and when we look at [extract\_price](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/RedstoneExtractor.sol\#L10) which just uses index 0 its mean they are suppose to pass just one symbol to redstone function to get payload and base on [redstone document](https://github.com/redstone-finance/redstone-evm-connector) payload's length just for one symbol is 172 bytes which is less than 224 bytes + +payload\_size = n*(32 + 32) + 32 + 1 + 65 +payload\_size\_for\_one\_asset = 1 * (32 + 32) + 32 + 1 + 65 = 172 bytes +payload\_size\_for\_two\_asset = 2 * (32 + 32) + 32 + 1 + 65 = 226 bytes +payload\_size\_for\_three\_asset = 3 * (32 + 32) + 32 + 1 + 65 = 290 bytes +... + **KupiaSecAdmin** -@WangSecurity - The 9574 bytes of payload is one example of Redstone payload. - +@WangSecurity - The 9574 bytes of payload is one example of Redstone payload. + The point is that it's obvious the price data can't fit in 224 bytes. As @rickkk137 mentioned, payload size for one asset of one signer is 172 bytes, but Verla requires oracle data of 3 signers, which will result in >500 bytes. 
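The data-package size formula quoted above can be checked directly. A caveat: the thread's n = 1 total is quoted as 172, but the formula's terms sum to 162; either way the multi-signer conclusion holds, since three signers' packages far exceed the `Bytes[224]` bound. This sketch also ignores any unsigned-metadata wrapper bytes Redstone appends:

```python
# Size of one signed Redstone data package per the formula quoted above.

def package_size(n_feeds: int) -> int:
    # n feeds at (32 + 32) bytes each, plus timestamp (32), size (1),
    # and a 65-byte signature
    return n_feeds * (32 + 32) + 32 + 1 + 65

assert package_size(1) == 162      # thread quotes 172; the terms give 162
assert package_size(2) == 226      # two feeds: already over 224
assert 3 * package_size(1) == 486  # 3 signers, 1 feed each: well over 224
```

So the 224-byte question reduces to the unique-signers threshold: one signer's single-feed package fits, three signers' packages cannot.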
**rickkk137** -Velar protocol uses version [0.6.1 redstone-evm-connector](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/package.json#L3) and in this version `RedstoneConsumerBase::getUniqueSignersThreshold` returns 1 in this path `/ -@redstone-finance/evm-connector -/ -contracts -/ -core -/ -RedstoneConsumerBase.sol`,hence just one signer is required -https://www.npmjs.com/package/@redstone-finance/evm-connector/v/0.6.1?activeTab=code -also when we look at [make file](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/GNUmakefile#L28C22-L28C42) we realized Velar's developers directly copy RedstoneConsumerBase contract without any changes,furthermore usingDataService function get unique signer as a parameter and when they restricted payload to 224 bytes its mean they want to pass 1 as a uniqueSignersCount -```typescript - - const wrappedContract = WrapperBuilder.wrap(contract).usingDataService({ - dataServiceId: "redstone-rapid-demo", - @>>> uniqueSignersCount: 1, - dataFeeds: ["BTC", "ETH", "BNB", "AR", "AVAX", "CELO"], - }); -``` -https://github.com/redstone-finance/redstone-showroom/blob/0db580be39bdccb9632ee4d8d8c80e4182d8e266/example/getTokensPrices.ts#L22 +Velar protocol uses version [0.6.1 redstone-evm-connector](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/package.json\#L3) and in this version `RedstoneConsumerBase::getUniqueSignersThreshold` returns 1 in this path `/ +@redstone-finance/evm-connector +/ +contracts +/ +core +/ +RedstoneConsumerBase.sol`,hence just one signer is required +https://www.npmjs.com/package/@redstone-finance/evm-connector/v/0.6.1?activeTab=code +also when we look at [make file](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/GNUmakefile\#L28C22-L28C42) we realized Velar's 
developers directly copy RedstoneConsumerBase contract without any changes,furthermore usingDataService function get unique signer as a parameter and when they restricted payload to 224 bytes its mean they want to pass 1 as a uniqueSignersCount +```typescript + + const wrappedContract = WrapperBuilder.wrap(contract).usingDataService({ + dataServiceId: "redstone-rapid-demo", + @>>> uniqueSignersCount: 1, + dataFeeds: ["BTC", "ETH", "BNB", "AR", "AVAX", "CELO"], + }); +``` +https://github.com/redstone-finance/redstone-showroom/blob/0db580be39bdccb9632ee4d8d8c80e4182d8e266/example/getTokensPrices.ts\#L22 **KupiaSecAdmin** @@ -2222,12 +2246,12 @@ https://github.com/redstone-finance/redstone-showroom/blob/0db580be39bdccb9632ee **WangSecurity** -As I understand, @KupiaSecAdmin is indeed correct here, and the current payload size won't work when the contracts are deployed on the Live chain. After clarifying with the sponsor, they've said they used 224 only for testnet and will increase it for the live chain, but there's no info about it in README or code comments. Hence, this should be indeed a valid issue. Planning to accept the escalation and validate with Medium severity. Medium severity because there is no loss of funds, and the second definition of High severity excludes contracts not working: -> Inflicts serious non-material losses (doesn't include contract simply not working). - -Hence, medium severity is appropriate: -> Breaks core contract functionality, rendering the contract useless or leading to loss of funds. - +As I understand, @KupiaSecAdmin is indeed correct here, and the current payload size won't work when the contracts are deployed on the Live chain. After clarifying with the sponsor, they've said they used 224 only for testnet and will increase it for the live chain, but there's no info about it in README or code comments. Hence, this should be indeed a valid issue. Planning to accept the escalation and validate with Medium severity. 
Medium severity because there is no loss of funds, and the second definition of High severity excludes contracts not working: +> Inflicts serious non-material losses (doesn't include contract simply not working). + +Hence, medium severity is appropriate: +> Breaks core contract functionality, rendering the contract useless or leading to loss of funds. + Are there any duplicates? **KupiaSecAdmin** @@ -2236,8 +2260,8 @@ Are there any duplicates? **WangSecurity** -Result: -Medium +Result: +Medium Unique **sherlock-admin4** @@ -2245,14 +2269,19 @@ Unique Escalations have been resolved successfully! Escalation status: -- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/75/#issuecomment-2344252146): accepted +- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/75/\#issuecomment-2344252146): accepted + + # Issue M-5: Not decreasing oracle timestamp validation leads to DoS for protocol users Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/79 +The protocol has acknowledged this issue. + ## Found by KupiaSec + ## Summary The protocol only allows equal or increased timestamp of oracle prices whenever an action happens in the protocol. This validation is wrong since it will lead to DoS for users. @@ -2297,20 +2326,18 @@ Manual Review It's recommended to remove that non-decreasing timestamp validation. If the protocol wants more strict oracle price validation than the RedStone does, it can just use the difference between oracle timestamp and current timestamp. - - ## Discussion **KupiaSecAdmin** -Escalate - +Escalate + Information required for this issue to be rejected. **sherlock-admin3** -> Escalate -> +> Escalate +> > Information required for this issue to be rejected. You've created a valid escalation! 
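The failure mode described in this summary can be modeled with a small sketch (hypothetical names; the real check sits in the protocol's oracle path). Once any caller lands a payload with timestamp T, every pending call carrying an older, still-fresh payload reverts:

```python
# Model of the non-decreasing timestamp validation; not the actual contract.

class StaleTimestamp(Exception):
    pass

class Oracle:
    def __init__(self) -> None:
        self.last_timestamp = 0

    def set_price(self, price: int, timestamp: int) -> None:
        # only equal or increasing timestamps are accepted
        if timestamp < self.last_timestamp:
            raise StaleTimestamp(timestamp)
        self.last_timestamp = timestamp

oracle = Oracle()
oracle.set_price(60_000, timestamp=10)  # a fresher payload lands first
try:
    oracle.set_price(60_000, timestamp=9)  # older-payload callers revert
except StaleTimestamp:
    pass
```

Both payloads may pass the freshness window individually; only their relative ordering causes the revert, which is why the DoS can be triggered intentionally by front-running with a newer payload.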
@@ -2321,37 +2348,37 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **rickkk137** - -the protocol for every action fetch a redstone's payload and attach that to the transaction let's assume: -User A:fetch payload a in T1 -user B:fetch payload b in T2 -user C:fetch payload c in T3 -and T3 > T2 > T1 and block.timestamp-[T1,T2,T3] < 3 minutes(I mean all of them are valid) -if all of them send their Txs to the network sequence is very important its mean just with this sequence [t1,t2,t3] none of TXs wouldn't be reverted, -if t2 will be executed first then t1 will be reverted and if t3 will be executed first t1 and t2 will be reverted - -```diff - contract RedstoneExtractor is PrimaryProdDataServiceConsumerBase { -+ uint lastTimeStamp; -+ uint price; - function extractPrice(bytes32 feedId, bytes calldata) - public view returns(uint256, uint256) - { -+ if(block.timestamp - lastTimeStamp > 3 minutes){ - bytes32[] memory dataFeedIds = new bytes32[](1); - dataFeedIds[0] = feedId; - (uint256[] memory values, uint256 timestamp) = - getOracleNumericValuesAndTimestampFromTxMsg(dataFeedIds); - validateTimestamp(timestamp); //!!! 
-- return (values[0], timestamp); -+ lastTimeStamp = block.timestamp; -+ price = values[0]; -+ } -+ return (price, lastTimeStamp); - } - } - ``` - + +the protocol for every action fetch a redstone's payload and attach that to the transaction let's assume: +User A:fetch payload a in T1 +user B:fetch payload b in T2 +user C:fetch payload c in T3 +and T3 > T2 > T1 and block.timestamp-[T1,T2,T3] < 3 minutes(I mean all of them are valid) +if all of them send their Txs to the network sequence is very important its mean just with this sequence [t1,t2,t3] none of TXs wouldn't be reverted, +if t2 will be executed first then t1 will be reverted and if t3 will be executed first t1 and t2 will be reverted + +```diff + contract RedstoneExtractor is PrimaryProdDataServiceConsumerBase { ++ uint lastTimeStamp; ++ uint price; + function extractPrice(bytes32 feedId, bytes calldata) + public view returns(uint256, uint256) + { ++ if(block.timestamp - lastTimeStamp > 3 minutes){ + bytes32[] memory dataFeedIds = new bytes32[](1); + dataFeedIds[0] = feedId; + (uint256[] memory values, uint256 timestamp) = + getOracleNumericValuesAndTimestampFromTxMsg(dataFeedIds); + validateTimestamp(timestamp); //!!! 
+- return (values[0], timestamp);
++ lastTimeStamp = block.timestamp;
++ price = values[0];
++ }
++ return (price, lastTimeStamp);
+ }
+ }
+ ```
+
But there isn't loss of funds, users can repeat their TXs

**KupiaSecAdmin**

@@ -2360,35 +2387,35 @@ But there isn't loss of funds, users can repeat their TXs

**rickkk137**

-I agree with u and this can be problematic and Here's an [example](https://github.com/euler-xyz/euler-price-oracle/blob/eeb1847df7d9d58029de37225dabf963bf1a65e6/src/adapter/redstone/RedstoneCoreOracle.sol#L71C9-L72C75) of this approach but the issue's final result depend on sherlock rules
+I agree with you, and this can be problematic. Here's an [example](https://github.com/euler-xyz/euler-price-oracle/blob/eeb1847df7d9d58029de37225dabf963bf1a65e6/src/adapter/redstone/RedstoneCoreOracle.sol#L71C9-L72C75) of this approach, but the issue's final result depends on Sherlock rules.

**WangSecurity**

-Not sure I understand how this can happen.
-
-For example, we fetched the price from RedStone at timestamp = 10.
-Then someone fetches the price again, and for the revert to happen, the timestamp of that price has to be 9, correct?
-
+Not sure I understand how this can happen.
+
+For example, we fetched the price from RedStone at timestamp = 10.
+Then someone fetches the price again, and for the revert to happen, the timestamp of that price has to be 9, correct?
+
How could that happen? Is it the user who chooses the price?

**KupiaSecAdmin**

-@WangSecurity - As a real-world example, there will be multiple users who get price from RedStone and call Verla protocol with that price data. NOTE: price data(w/ timestamp) is added as calldata for every call.
-So among those users, there will be users calling protocol with price timestamp 8, some with 9, some with 10.
-
+@WangSecurity - As a real-world example, there will be multiple users who get the price from RedStone and call the Velar protocol with that price data.
NOTE: price data(w/ timestamp) is added as calldata for every call. +So among those users, there will be users calling protocol with price timestamp 8, some with 9, some with 10. + If one call with price timestamp 10 is called, other remaining calls with price timestamp 8 and 9 will revert. **WangSecurity** -Thank you for this clarification. Indeed, this is possible, and it can happen even intentionally. On the other hand, this is only one-block DOS and the users could proceed with the transactions in the next block (assuming they use the newer price). However, this vulnerability affects the opening and closing of positions, which are time-sensitive functions. Hence, this should be a valid medium based on this rule: -> Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue. - +Thank you for this clarification. Indeed, this is possible, and it can happen even intentionally. On the other hand, this is only one-block DOS and the users could proceed with the transactions in the next block (assuming they use the newer price). However, this vulnerability affects the opening and closing of positions, which are time-sensitive functions. Hence, this should be a valid medium based on this rule: +> Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue. + Planning to accept the escalation and validate with medium severity. **WangSecurity** -Result: -Medium +Result: +Medium Unique **sherlock-admin4** @@ -2396,14 +2423,19 @@ Unique Escalations have been resolved successfully! 
Escalation status:
-- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/79/#issuecomment-2344257659): accepted
+- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/79/#issuecomment-2344257659): accepted
+
+
# Issue M-6: Usage of `tx.origin` to determine the user is prone to attacks

Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/82

+The protocol has acknowledged this issue.
+
## Found by
Bauer, Greed, Japy69, KupiaSec, Waydou, bughuntoor, ctf\_sec, y4y

+
## Summary
Usage of `tx.origin` to determine the user is prone to attacks

@@ -2433,20 +2465,18 @@ Manual Review

## Recommendation
Instead of using `tx.origin` in `core.vy`, simply pass `msg.sender` as a parameter from `api.vy`

-
-
## Discussion

**T1MOH593**

-Escalate
-
+Escalate
+
Noticed there were 19 escalations on preliminary valid issues. This is final escalation to make it 20/20 🙂

**sherlock-admin3**

-> Escalate
-> 
+> Escalate
+> 
> Noticed there were 19 escalations on preliminary valid issues. This is final escalation to make it 20/20 🙂

You've created a valid escalation!

@@ -2465,8 +2495,8 @@ Planning to reject the escalation and leave the issue as it is.

**WangSecurity**

-Result:
-Medium
+Result:
+Medium
Has duplicates

**sherlock-admin4**

@@ -2474,14 +2504,19 @@ Has duplicates

Escalations have been resolved successfully!

Escalation status:
-- [T1MOH593](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/82/#issuecomment-2347091773): rejected
+- [T1MOH593](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/82/#issuecomment-2347091773): rejected
+
+
# Issue M-7: Funding Paid != Funding Received

Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/83

+The protocol has acknowledged this issue.
+
## Found by
0x37, 0xbranded, bughuntoor

+
## Summary
Due to special requirements around receiving funding fees for a position, the funding fees received can be less than that paid. These funding fee payments are still paid, but a portion of them will not be withdrawn, and become stuck funds. This also violates the contract specification that `sum(funding_received) = sum(funding_paid)`.

@@ -2547,20 +2582,18 @@ Consider an alternative method of accounting for funding fees, as there are many

For example, include a new state variable that explicitly tracks unpaid funding fee payments and perform some pro rata or market adjustment to future funding fee recipients, specifically for *that token*.

-
-
## Discussion

**spacegliderrrr**

-Escalate
-
+Escalate
+
Issue should be invalidated. It does not showcase any Medium-severity impact, but rather just a slightly incorrect code comment. Contest readme did not include said invariant as one that must always be true, so simply breaking it slightly does not warrant Medium severity.

**sherlock-admin3**

-> Escalate
-> 
+> Escalate
+> 
> Issue should be invalidated. It does not showcase any Medium-severity impact, but rather just a slightly incorrect code comment. Contest readme did not include said invariant as one that must always be true, so simply breaking it slightly does not warrant Medium severity.

You've created a valid escalation!

@@ -2571,28 +2604,28 @@ You may delete or edit your escalation comment anytime before the 48-hour escala

**msheikhattari**

-> If the protocol team provides specific information in the README or CODE COMMENTS, that information stands above all judging rules.
-
-The included code comment was an explicit invariant outlined by the team:
-```
-# - the protocol handles the accounting needed to maintain the sytem invariants:
-# * funding payments match
-# sum(funding_received) = sum(funding_paid)
-```
-
-The problematic code was actually intended to enforce this invariant, however it does so incorrectly. In the case of a negative position due to fee underpayment, `sum(funding_received) > sum(funding_paid)`. To correct for this, or serve as a deterrent, these positions will not receive their funding payment. However the token in which they underpaid is not the same token that they will not receive funding payment for. As a result the imbalance between funding paid and received is not corrected - it is actually worsened.
-
-Not only that, but users may have paid their full funding payment and the `sum(funding_received) = sum(funding_paid)` invariant holds. But if the remaining balance was then not enough to cover their borrow fee, they will not receive their funding payment which would actually cause this invariant to break.
-
+> If the protocol team provides specific information in the README or CODE COMMENTS, that information stands above all judging rules.
+
+The included code comment was an explicit invariant outlined by the team:
+```
+# - the protocol handles the accounting needed to maintain the sytem invariants:
+# * funding payments match
+# sum(funding_received) = sum(funding_paid)
+```
+
+The problematic code was actually intended to enforce this invariant; however, it does so incorrectly. In the case of a negative position due to fee underpayment, `sum(funding_received) > sum(funding_paid)`. To correct for this, or serve as a deterrent, these positions will not receive their funding payment. However, the token in which they underpaid is not the same token for which they will not receive funding payment. As a result, the imbalance between funding paid and received is not corrected - it is actually worsened.
+
+Not only that, but users may have paid their full funding payment and the `sum(funding_received) = sum(funding_paid)` invariant holds. But if the remaining balance was then not enough to cover their borrow fee, they will not receive their funding payment, which would actually cause this invariant to break.
+
This specification is the source of truth, and the code clearly does not abide by it. The issue proposes alternative methods for accounting funding fee payments to ensure this invariant consistently holds.

**msheikhattari**

-Also I do think this issue is similar to #18 and #93
-
-Will leave it to HoJ if they are combined since the source of the error is the same, but different impacts are described.
-
-This issue discusses the code being not to spec and breaking an invariant.
+Also I do think this issue is similar to #18 and #93
+
+Will leave it to HoJ if they are combined since the source of the error is the same, but different impacts are described.
+
+This issue discusses the code being not to spec and breaking an invariant.
The other two issues mention a known issue from the sponsor of stuck/undistributed funds

**WangSecurity**

@@ -2601,57 +2634,57 @@ The other two issues mention a known issue from the sponsor of stuck/undistribut

**spacegliderrrr**

-@WangSecurity Most of your points are correct. Yes the code works as expected. Funding fee comes before borrowing fee. But for both of them, since due to fail to liquidate position on time, fees can exceed the total position collateral. Because of this, funding fees are paid with priority and if there's anything left, borrowing fees are paid for as much as there's left.
-
+@WangSecurity Most of your points are correct. Yes, the code works as expected. Funding fee comes before borrowing fee. But for both of them, since a position may fail to be liquidated on time, fees can exceed the total position collateral.
Because of this, funding fees are paid with priority and, if there's anything left, borrowing fees are paid for as much as is left.
+
In the case where funding fees are overpaid (for more than the total position collateral), the other side receives these fees in a FIFO order, which is also clearly stated in the comments.

**msheikhattari**

-@spacegliderrrr is correct, funding fees are paid before borrow fees. As such there is no evidence that the comment spec is outdated, nor does any other documentation such as the readme appear to indicate so.
-
-> In the case where funding fees are overpaid (for more than the total position collateral), the other side receives these fees in a FIFO order, which is also clearly stated in the comments.
-
-To elaborate on this point, the vulnerability describes the case that `funding_received` from the other side is greater than the `funding_paid`. While it is indeed acknowledged fees are received in FIFO order, this is a problematic mitigation for this issue. This current approach is a questionable means of correcting the imbalance of funding payments to receipts. There are many cases where it not only doesn't correct for the funding payment imbalance, but actually worsens it, as explained above.
-
+@spacegliderrrr is correct, funding fees are paid before borrow fees. As such, there is no evidence that the comment spec is outdated, nor does any other documentation such as the readme appear to indicate so.
+
+> In the case where funding fees are overpaid (for more than the total position collateral), the other side receives these fees in a FIFO order, which is also clearly stated in the comments.
+
+To elaborate on this point, the vulnerability describes the case that `funding_received` from the other side is greater than the `funding_paid`. While it is indeed acknowledged that fees are received in FIFO order, this is a problematic mitigation for this issue.
This current approach is a questionable means of correcting the imbalance of funding payments to receipts. There are many cases where it not only doesn't correct for the funding payment imbalance, but actually worsens it, as explained above.
+
Regardless, this seems to be a clearly defined invariant. Not only from this comment, but it is further implied by the logic of this fee handling, which penalizes negative positions to build up some "insurance" tokens to hedge against the case that funding fees are underpaid. It also intuitively makes sense for this invariant to generally hold, as otherwise malfunctions can occur, such as failure to close positions; several linked issues reported various related problems stemming from this invariant breaking as well.

**WangSecurity**

-Another question I've got after re-reading the report. The impact section says there will be stuck tokens in the contract which will never be paid. But, as I understand the problem is different. The issue is that these tokens are underpaid, e.g. if the funding fee is 10 tokens, but the remaining collateral is only 8, then there are 2 tokens underpaid. So, how the are stuck tokens in the contract, if the funding fee is not paid in full. Or it refers to the funding_received being larger than the funding_received actually?
+Another question I've got after re-reading the report. The impact section says there will be stuck tokens in the contract which will never be paid. But, as I understand, the problem is different. The issue is that these tokens are underpaid, e.g. if the funding fee is 10 tokens, but the remaining collateral is only 8, then there are 2 tokens underpaid. So, how are there stuck tokens in the contract if the funding fee is not paid in full? Or does it actually refer to the `funding_received` being larger than the `funding_paid`?

**WangSecurity**

-Since no answer is provided, I assume it's a typo in the report and the contract doesn't have fewer tokens, since the fee is underpaid, not overpaid.
+Since no answer is provided, I assume it's a typo in the report and the contract doesn't have fewer tokens, since the fee is underpaid, not overpaid.
But, the issue here is not medium severity, because the position's collateral is less than the funding fees and it cannot pay the full fee. Just breaking the invariant is not sufficient for medium severity.

Planning to accept the escalation and invalidate the issue.

**msheikhattari**

-Apologies for the delayed response @WangSecurity
-
-Yes, the original statement was as intended. That portion of the report was pointing out that there are two problematic impacts of the current approach that cause `sum(funding_paid) != sum(funding_received)`
-1. It's possible for `funding_received` by one side of the pool to exceed the `funding_paid` by the other side, in the case that a position went negative.
-2. It's possible for `funding_received` to be less than `funding_paid`, even for positions which did not go insolvent due to funding rates.
-
-The outlined invariant is broken which results in loss of funds / broken functionality. It will prevent closing of some positions in the former case, and will result in some funds being stuck in the latter case.
-
+Apologies for the delayed response @WangSecurity
+
+Yes, the original statement was as intended. That portion of the report was pointing out that there are two problematic impacts of the current approach that cause `sum(funding_paid) != sum(funding_received)`:
+1. It's possible for `funding_received` by one side of the pool to exceed the `funding_paid` by the other side, in the case that a position went negative.
+2. It's possible for `funding_received` to be less than `funding_paid`, even for positions which did not go insolvent due to funding rates.
+
+The outlined invariant is broken, which results in loss of funds / broken functionality. It will prevent closing of some positions in the former case, and will result in some funds being stuck in the latter case.
+ The current approach of penalizing negative positions is intended to 'build up an insurance' as stated by the team. But as mentioned here, not only does it not remediate the issue it actually furthers the imbalance. The funding tokens have already been underpaid in the case of a negative position - preventing those positions from receiving their fees results in underpayment of their funding as well. **WangSecurity** -> It's possible for funding_received by one side of the pool to exceed the funding_paid by the other side, in the case that a position went negative. - -Could you elaborate on this? Similar to my analysis [here](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/86#issuecomment-2381323660), If the position is negative, but the collateral is not 0, the funding paid is larger than the position's collateral, the funding paid will be decreased to pos. collateral. The funding received will be 0 (if collateral is < funding paid, c1: remaining =0 and deducted =pos.collateral. Then c2: remaining =0 and deducted =0. funding received =0, because remaining =0. So, not sure how funding_received can be > funding paid and I need a more concrete example with numbers. - -> It's possible for funding_received to be less than funding_paid, even for positions which did not go insolvent due to funding rates. - -Agree that it's possible. - -> The outlined invariant is broken which results in loss of funds / broken functionality. It will prevent closing of some positions in the former case, and will result in some funds being stuck in the latter case. - -However, the report doesn't showcase that this broken invariant (when funding received < funding paid) will result in any of this. The report only says that it will lead to a dilution of fees, but there is still no explanation of how. - -Hence, without a concrete example of how this leads to positions not being closed, thus leading to losses of these positions, and owners, I cannot verify. 
-
+> It's possible for funding_received by one side of the pool to exceed the funding_paid by the other side, in the case that a position went negative.
+
+Could you elaborate on this? Similar to my analysis [here](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/86#issuecomment-2381323660): if the position is negative, but the collateral is not 0, and the funding paid is larger than the position's collateral, the funding paid will be decreased to pos. collateral. The funding received will be 0 (if collateral is < funding paid, c1: remaining = 0 and deducted = pos.collateral. Then c2: remaining = 0 and deducted = 0. Funding received = 0, because remaining = 0). So, not sure how funding_received can be > funding paid, and I need a more concrete example with numbers.
+
+> It's possible for funding_received to be less than funding_paid, even for positions which did not go insolvent due to funding rates.
+
+Agree that it's possible.
+
+> The outlined invariant is broken which results in loss of funds / broken functionality. It will prevent closing of some positions in the former case, and will result in some funds being stuck in the latter case.
+
+However, the report doesn't showcase that this broken invariant (when funding received < funding paid) will result in any of this. The report only says that it will lead to a dilution of fees, but there is still no explanation of how.
+
+Hence, without a concrete example of how this leads to positions not being closed, thus leading to losses for these positions and owners, I cannot verify.
+
I see that the invariant doesn't hold, but this is not sufficient for Medium severity (only broken invariants from README are assigned Medium, not invariants from code comments). Hence, the decision for now is to invalidate this issue and accept the escalation.
But, if there's a POC, how does this lead to loss of funds in case funding received < funding paid, or how does funding received > funding paid and how does this lead to a loss of funds, I will consider changing my decision. **WangSecurity** @@ -2660,94 +2693,94 @@ If no answer is provided, planning to accept the escalation and invalidate the i **msheikhattari** -> I see that the invariant doesn't hold, but this is not sufficient for Medium severity (only broken invariants from README are assigned Medium, not invariants from code comments) - -Doesn't the below rule from the severity categorization apply here? That's what I had in mind when creating the issue, since this comment was the only source of documentation on this specification. There is no indication that this is out of date based on any other specs or code elsewhere. - -``` -Hierarchy of truth: If the protocol team provides no specific information, the default rules apply (judging guidelines). - -If the protocol team provides specific information in the README or CODE COMMENTS, that information stands above all judging rules. In case of contradictions between the README and CODE COMMENTS, the README is the chosen source of truth. -``` - -Nevertheless there are loss of funds from each case, let me first prove that `funding_received > funding_paid` is possible: - -> Then c2: remaining =0 and deducted =0. funding received =0, because remaining =0. - -This is the crux of the issue I am reporting here. That's true in the case of a single position, the issue is that `funding_received` of that position being set to 0 does not mitigate the underpayment of funding which it made by going negative - they are opposite tokens in the pair. 
- -While that positions `funding_payment` is capped at the collateral of the specific position, the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated: - -```vyper - paid_long_term : uint256 = self.apply(fs.long_collateral, fs.funding_long * new_terms) - received_short_term : uint256 = self.divide(paid_long_term, fs.short_collateral) -``` - -This is because this collateral is not excluded from `fs.long_collateral`, it must be explicitly subtracted upon liquidation. - -Now the issue causing loss of funds on this side is that positions will fail to close. On the other side, where `funding_received < funding_paid`, this is especially problematic in cases where the full funding payment was made, but the collateral fell short of the borrow fee. - -In this case, the balance `sum(funding_received) = sum(funding_paid)` was not broken by this position, as it made its full funding payment, but it will be excluded from receiving its portion of funding receipts. These tokens will not be directly claimable by any positions in the pool, causing loss of funds in that sense. - -Upholding this balance of funding payments to receipts is an important invariant which causes loss of funds and protocol disruptions as outlined above. This is even acknowledged by the team, since this current approach is meant to build up an "insurance" by penalizing negative positions to pay out future negative positions. - -What it fails to acknowledge is that the penalized position has already underpaid its funding fee (in token A), and now the balance is further distorted by eliminating its funding payment (in token B). To move closer to equilibrium between the two sides of funding, a direct approach such as socializing fee underpayments is recommended instead. 
-
+> I see that the invariant doesn't hold, but this is not sufficient for Medium severity (only broken invariants from README are assigned Medium, not invariants from code comments)
+
+Doesn't the below rule from the severity categorization apply here? That's what I had in mind when creating the issue, since this comment was the only source of documentation on this specification. There is no indication that this is out of date based on any other specs or code elsewhere.
+
+```
+Hierarchy of truth: If the protocol team provides no specific information, the default rules apply (judging guidelines).
+
+If the protocol team provides specific information in the README or CODE COMMENTS, that information stands above all judging rules. In case of contradictions between the README and CODE COMMENTS, the README is the chosen source of truth.
+```
+
+Nevertheless, there are losses of funds in each case; let me first prove that `funding_received > funding_paid` is possible:
+
+> Then c2: remaining =0 and deducted =0. funding received =0, because remaining =0.
+
+This is the crux of the issue I am reporting here. That's true in the case of a single position; the issue is that `funding_received` of that position being set to 0 does not mitigate the underpayment of funding which it made by going negative - they are opposite tokens in the pair.
+
+While that position's `funding_payment` is capped at the collateral of the specific position, the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated:
+
+```vyper
+    paid_long_term     : uint256 = self.apply(fs.long_collateral, fs.funding_long * new_terms)
+    received_short_term: uint256 = self.divide(paid_long_term, fs.short_collateral)
+```
+
+This is because this collateral is not excluded from `fs.long_collateral`; it must be explicitly subtracted upon liquidation.
+
+Now the issue causing loss of funds on this side is that positions will fail to close.
On the other side, where `funding_received < funding_paid`, this is especially problematic in cases where the full funding payment was made, but the collateral fell short of the borrow fee.
+
+In this case, the balance `sum(funding_received) = sum(funding_paid)` was not broken by this position, as it made its full funding payment, but it will be excluded from receiving its portion of funding receipts. These tokens will not be directly claimable by any positions in the pool, causing loss of funds in that sense.
+
+Upholding this balance of funding payments to receipts is an important invariant; breaking it causes loss of funds and protocol disruptions as outlined above. This is even acknowledged by the team, since this current approach is meant to build up an "insurance" by penalizing negative positions to pay out future negative positions.
+
+What it fails to acknowledge is that the penalized position has already underpaid its funding fee (in token A), and now the balance is further distorted by eliminating its funding payment (in token B). To move closer to equilibrium between the two sides of funding, a direct approach such as socializing fee underpayments is recommended instead.
+
There are a few different moving parts, so let me know if any component of this line of reasoning is unclear, and I can provide specific examples if needed.

**WangSecurity**

-> Doesn't the below rule from the severity categorization apply here? That's what I had in mind when creating the issue, since this comment was the only source of documentation on this specification. There is no indication that this is out of date based on any other specs or code elsewhere
There is no indication that this is out of date based on any other specs or code elsewhere - -I didn't say the rule applies here, but if there's something said in the code comments and the code doesn't follow it, the issue still has to have at least Medium severity to be valid: -> The protocol team can use the README (and only the README) to define language that indicates the codebase's restrictions and/or expected functionality. Issues that break these statements, irrespective of whether the impact is low/unknown, will be assigned Medium severity. High severity will be applied only if the issue falls into the High severity category in the judging guidelines. - -So if the issue breaks an invariant from code comments, but it doesn't have Medium severity, then it's invalid. - -> This is the crux of the issue I am reporting here. That's true in the case of a single position, the issue is that funding_received of that position being set to 0 does not mitigate the underpayment of funding which it made by going negative - they are opposite tokens in the pair. -While that positions funding_payment is capped at the collateral of the specific position, the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated: - -That still doesn't prove how `funding_recevied` can be > `funding_paid`. 
The function [`calc_fees`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L244) which is where this calculation of `funding_received` and `funding_paid` happens is called inside [`value`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L162) function which is called only inside [`close`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L389) and [`is_liquidatable`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L356) , which is called only inside [`liquidate`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/core.vy#L325). - -Hence, this calculation of funding paid and received will be made only when closing or liquidating the position. So, the following is wrong: -> the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated - -It won't accrue because the position is either already liquidated or closed. Hence, the scenario of `funding_received > funding_paid` is still not proven. - -About the `funding_received < funding_paid`. As I understand, it's intended that the position doesn't receive any funding fees in this scenario which evident by [code comment](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L259). - -So if the position didn't manage to pay the full `funding_paid` (underpaid the fees), they're intentionally excluded from receiving any funding fees and it's not a loss of funds and these will be received by other users. 
-
+> Doesn't the below rule from the severity categorization apply here? That's what I had in mind when creating the issue, since this comment was the only source of documentation on this specification. There is no indication that this is out of date based on any other specs or code elsewhere
+
+I didn't say the rule applies here, but if there's something said in the code comments and the code doesn't follow it, the issue still has to have at least Medium severity to be valid:
+> The protocol team can use the README (and only the README) to define language that indicates the codebase's restrictions and/or expected functionality. Issues that break these statements, irrespective of whether the impact is low/unknown, will be assigned Medium severity. High severity will be applied only if the issue falls into the High severity category in the judging guidelines.
+
+So if the issue breaks an invariant from code comments, but it doesn't have Medium severity, then it's invalid.
+
+> This is the crux of the issue I am reporting here. That's true in the case of a single position, the issue is that funding_received of that position being set to 0 does not mitigate the underpayment of funding which it made by going negative - they are opposite tokens in the pair.
+While that positions funding_payment is capped at the collateral of the specific position, the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated:
+
+That still doesn't prove how `funding_received` can be > `funding_paid`.
The function [`calc_fees`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L244), which is where this calculation of `funding_received` and `funding_paid` happens, is called inside the [`value`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L162) function, which is called only inside [`close`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L389) and [`is_liquidatable`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L356), which is called only inside [`liquidate`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/core.vy#L325).
+
+Hence, this calculation of funding paid and received will be made only when closing or liquidating the position. So, the following is wrong:
+> the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated
+
+It won't accrue because the position is either already liquidated or closed. Hence, the scenario of `funding_received > funding_paid` is still not proven.
+
+About `funding_received < funding_paid`: as I understand, it's intended that the position doesn't receive any funding fees in this scenario, which is evident from the [code comment](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/positions.vy#L259).
+
+So if the position didn't manage to pay the full `funding_paid` (underpaid the fees), it is intentionally excluded from receiving any funding fees; it's not a loss of funds and these will be received by other users.
+ +Hence, my decision remains: accept the escalation and invalidate the issue. If you still see that I'm wrong somewhere, you're welcome to correct me. But, to make it easier, provide links to the appropriate LOC you refer to. + **msheikhattari** -> Hence, this calculation of funding paid and received will be made only when closing or liquidating the position. So, the following is wrong: - -That's not quite correct. Yes, `calc_fees` is only called upon closing/liquidating the position. But, this only only calculates the user's [pro-rate share of the accrued interest](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L265): - -```vyper -def calc(id: uint256, long: bool, collateral: uint256, opened_at: uint256) -> SumFees: - period: Period = self.query(id, opened_at) - P_b : uint256 = self.apply(collateral, period.borrowing_long) if long else ( - self.apply(collateral, period.borrowing_short) ) - P_f : uint256 = self.apply(collateral, period.funding_long) if long else ( - self.apply(collateral, period.funding_short) ) - R_f : uint256 = self.multiply(collateral, period.received_long) if long else ( - self.multiply(collateral, period.received_short) ) - - - return SumFees({funding_paid: P_f, funding_received: R_f, borrowing_paid: P_b}) -``` - -The terms `period.received_long`, `period.received_short` are relevant here and these are continuously, globally updated upon any interaction by any user with the system. As a result, those positions will count towards the total collateral until explicitly closed, inflating `funding_received` beyond its true value. - -> and it's not a loss of funds and these will be received by other users. - -The point of this issue is that user's will not receive those lost funds. 
Since the global `funding_received` included the collateral of positions which were later excluded from receiving their fee, the eventual pro-rata distribution of fees upon closing the position will not be adjusted for this. Thus some portion of fees will remain unclaimed.
-
+> Hence, this calculation of funding paid and received will be made only when closing or liquidating the position. So, the following is wrong:
+
+That's not quite correct. Yes, `calc_fees` is only called upon closing/liquidating the position. But this only calculates the user's [pro-rata share of the accrued interest](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L265):
+
+```vyper
+def calc(id: uint256, long: bool, collateral: uint256, opened_at: uint256) -> SumFees:
+    period: Period  = self.query(id, opened_at)
+    P_b   : uint256 = self.apply(collateral, period.borrowing_long) if long else (
+                      self.apply(collateral, period.borrowing_short) )
+    P_f   : uint256 = self.apply(collateral, period.funding_long) if long else (
+                      self.apply(collateral, period.funding_short) )
+    R_f   : uint256 = self.multiply(collateral, period.received_long) if long else (
+                      self.multiply(collateral, period.received_short) )
+
+    return SumFees({funding_paid: P_f, funding_received: R_f, borrowing_paid: P_b})
+```
+
+The terms `period.received_long` and `period.received_short` are relevant here: they are continuously, globally updated upon any interaction by any user with the system. As a result, those positions will count towards the total collateral until explicitly closed, inflating `funding_received` beyond its true value.
+
+> and it's not a loss of funds and these will be received by other users.
+
+The point of this issue is that users will not receive those lost funds.
Since the global `funding_received` included the collateral of positions which were later excluded from receiving their fee, the eventual pro-rata distribution of fees upon closing the position will not be adjusted for this. Thus some portion of fees will remain unclaimed.
+
The current approach is problematic with loss of funds and disrupted liquidation functionality. There are more direct ways to achieve the correct balance of funding payments to receipts.

**WangSecurity**

@@ -2756,62 +2789,62 @@ To finally confirm, the problem here is that these funds are just locked in the

**rickkk137**

-as I got the main point in this report is funding_received can exceed funding_paid but this isn't correct
-
-
->It's possible for funding_received by one side of the pool to exceed the funding_paid by the other side, in the case that a position went negative.
-
-
- -provided PoC - - -```vyper - - def test_funding_received_cannot_exceed_than_funding_paid(core, BTC,mint_token, STX, lp_provider, LP, open, mint, short, positions, owner, pools, fees, oracle, api, long, close): - pools.CORE() == core - fees.CORE() == core - positions.CORE() == core - oracle.API() == api - core.API() == api - mint_token(BTC, btc(1000), lp_provider) - mint_token(BTC, btc(1000), short) - mint_token(STX, d(100000), long) - - - mint_token(STX, d(100_000), lp_provider) - assert BTC.balanceOf(lp_provider) == btc(1000) - assert BTC.balanceOf(short) == btc(1000) - - assert STX.balanceOf(lp_provider) == d(100_000) - - core.fresh("BTC-STX", BTC, STX, LP, sender=owner) - BTC.approve(core.address, btc(1000), sender=lp_provider) - STX.approve(core.address, d(100_000), sender=lp_provider) - mint(BTC, STX, LP, btc(100), d(100_000), price=d(50_000), sender=lp_provider) - - BTC.approve(core.address, 10000000, sender=short) - STX.approve(core.address, d(100000), sender=long) - print("stx_balance:", STX.balanceOf(long) / 1e6) - - open(BTC, STX, True, d(50000), 1, price=d(50_000), sender=long) - open(BTC, STX, False, 10000, 1, price=d(50_000), sender=short) - - - chain.mine(20000) - - position = positions.value(1, ctx(d(50_000)))#funding_fee payer - position2 = positions.value(2, ctx(d(50_000)))#funding_fee receiver - - print("position.remaining_collateral:", position.pnl.remaining / 1e6) - print("position.fees.funding_paid:", position.fees.funding_paid / 1e6) - - print("position.funding_received:", position2.fees.funding_received / 1e6) - - #@>>>>>position.remaining_collateral: 0.0 - #@>>>>>position.fees.funding_paid: 50000.0 - #@>>>>>position.funding_received: 50000.0 -``` +as I got the main point in this report is funding\_received can exceed funding\_paid but this isn't correct + + +>It's possible for funding\_received by one side of the pool to exceed the funding\_paid by the other side, in the case that a position went negative. + +
+
+provided PoC
+
+
+```vyper
+
+def test_funding_received_cannot_exceed_than_funding_paid(core, BTC, mint_token, STX, lp_provider, LP, open, mint, short, positions, owner, pools, fees, oracle, api, long, close):
+    pools.CORE() == core
+    fees.CORE() == core
+    positions.CORE() == core
+    oracle.API() == api
+    core.API() == api
+    mint_token(BTC, btc(1000), lp_provider)
+    mint_token(BTC, btc(1000), short)
+    mint_token(STX, d(100000), long)
+
+    mint_token(STX, d(100_000), lp_provider)
+    assert BTC.balanceOf(lp_provider) == btc(1000)
+    assert BTC.balanceOf(short) == btc(1000)
+
+    assert STX.balanceOf(lp_provider) == d(100_000)
+
+    core.fresh("BTC-STX", BTC, STX, LP, sender=owner)
+    BTC.approve(core.address, btc(1000), sender=lp_provider)
+    STX.approve(core.address, d(100_000), sender=lp_provider)
+    mint(BTC, STX, LP, btc(100), d(100_000), price=d(50_000), sender=lp_provider)
+
+    BTC.approve(core.address, 10000000, sender=short)
+    STX.approve(core.address, d(100000), sender=long)
+    print("stx_balance:", STX.balanceOf(long) / 1e6)
+
+    open(BTC, STX, True, d(50000), 1, price=d(50_000), sender=long)
+    open(BTC, STX, False, 10000, 1, price=d(50_000), sender=short)
+
+    chain.mine(20000)
+
+    position = positions.value(1, ctx(d(50_000)))   # funding_fee payer
+    position2 = positions.value(2, ctx(d(50_000)))  # funding_fee receiver
+
+    print("position.remaining_collateral:", position.pnl.remaining / 1e6)
+    print("position.fees.funding_paid:", position.fees.funding_paid / 1e6)
+
+    print("position.funding_received:", position2.fees.funding_received / 1e6)
+
+    #@>>>>>position.remaining_collateral: 0.0
+    #@>>>>>position.fees.funding_paid: 50000.0
+    #@>>>>>position.funding_received: 50000.0
+```
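Stepping back from the contract specifics, the accounting disagreement in this thread can be illustrated with a small toy model (plain Python; the numbers and the flat per-unit accrual are assumptions for illustration, not the actual Velar fee math): if per-unit receipts are accrued against total collateral that still includes a position whose claim is later zeroed, the zeroed share is distributed to no one.

```python
# Toy model (illustrative only, not the Velar contracts): funding receipts
# accrue per unit of total short collateral, then one short's claim is
# zeroed on liquidation, as described for negative positions above.
paid_by_longs = 100                       # funding paid by longs this period
short_collateral = {"A": 60, "B": 40}     # position B will go negative

# Per-unit receipts are computed while B still counts toward the total.
received_per_unit = paid_by_longs / sum(short_collateral.values())  # 1.0

# B is liquidated and its funding_received is forced to 0.
claims = {"A": short_collateral["A"] * received_per_unit, "B": 0}

distributed = sum(claims.values())        # 60
unclaimed = paid_by_longs - distributed   # 40 remains unclaimed in the pool
assert unclaimed == 40
```

Under these assumed numbers, 40 of the 100 units paid are never received by any position, matching the claim that the zeroed share is not redistributed.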
**WangSecurity** @@ -2820,7 +2853,7 @@ Yeah, there isn't a proof it can happen, the problem we are discussing now is th **rickkk137** -In my poc the position finally become negative but funding received is equal to funding paid +In my poc the position finally become negative but funding received is equal to funding paid But funding receiving wouldn't stuck in contracts and will be distributed to later positions **WangSecurity** @@ -2829,93 +2862,93 @@ Thank you for this correction. With this, the escalation will be accepted and th **msheikhattari** -Hi @rickkk137 can you specifically clarify the scenario which the PoC is describing? Because from my understanding, it is not showing the case which I described. The nuance is subtle, allow me to explain - -Let's say there is one position on the long side, two on the short side, and that longs are paying shorts. Now if one of the short positions goes negative, the other will not receive the total funding fees over the period, it will only get its pro-rata share from the time that the (now liquidated) position was still active. - -For comparison, it seems your PoC is showing a single long and short, and you are showing that when the long is liquidated the short should still receive their funding payment. This is a completely different scenario from what is described in this issue. - -To answer your earlier question @WangSecurity - -> To finally confirm, the problem here is that these funds are just locked in the contract, and no one can get them, correct? - +Hi @rickkk137 can you specifically clarify the scenario which the PoC is describing? Because from my understanding, it is not showing the case which I described. The nuance is subtle, allow me to explain + +Let's say there is one position on the long side, two on the short side, and that longs are paying shorts. 
Now if one of the short positions goes negative, the other will not receive the total funding fees over the period; it will only get its pro-rata share from the time that the (now liquidated) position was still active.
+
+For comparison, it seems your PoC is showing a single long and short, and you are showing that when the long is liquidated the short should still receive their funding payment. This is a completely different scenario from what is described in this issue.
+
+To answer your earlier question @WangSecurity
+
+> To finally confirm, the problem here is that these funds are just locked in the contract, and no one can get them, correct?
+
Yes, that's correct. That portion of the funding payments is not accessible to any users.

**rickkk137**

-when a position be penalized and funding_received for that position become zero and base or quote collateral's pool will be decrease
-```
-@external
-def close(id: uint256, d: Deltas) -> PoolState:
-...
+When a position is penalized and its funding_received becomes zero, the pool's base or quote collateral decreases:
+```
+@external
+def close(id: uint256, d: Deltas) -> PoolState:
+...
+ base\_collateral : self.MATH.eval(ps.base\_collateral, d.base\_collateral), + quote\_collateral : self.MATH.eval(ps.quote\_collateral, d.quote\_collateral), +``` +its mean other position get more funding\_received compared to past because funding\_recieved has reserve relation with base or qoute collateral +``` + paid\_long\_term : uint256 = self.apply(fs.long\_collateral, fs.funding\_long * new\_terms) + @>> received\_short\_term : uint256 = self.divide(paid\_long\_term, fs.short\_collateral) +``` + **msheikhattari** -The pro rata share of funding received will be correctly adjusted moving forward from liquidation. But the point is that the period.funding_received terms already included the now liquidated collateral, so the other positions do not receive adjusted distributions to account for that. +The pro rata share of funding received will be correctly adjusted moving forward from liquidation. But the point is that the period.funding\_received terms already included the now liquidated collateral, so the other positions do not receive adjusted distributions to account for that. **rickkk137** -``` -def query(id: uint256, opened_at: uint256) -> Period: - """ - Return the total fees due from block `opened_at` to the current block. 
- """ - fees_i : FeeState = Fees(self).fees_at_block(opened_at, id) - fees_j : FeeState = Fees(self).current_fees(id) - return Period({ - borrowing_long : self.slice(fees_i.borrowing_long_sum, fees_j.borrowing_long_sum), - borrowing_short : self.slice(fees_i.borrowing_short_sum, fees_j.borrowing_short_sum), - funding_long : self.slice(fees_i.funding_long_sum, fees_j.funding_long_sum), - funding_short : self.slice(fees_i.funding_short_sum, fees_j.funding_short_sum), - @>>> received_long : self.slice(fees_i.received_long_sum, fees_j.received_long_sum), - @>>> received_short : self.slice(fees_i.received_short_sum, fees_j.received_short_sum), - }) -both will be updated when liquidable position will be closed +``` +def query(id: uint256, opened\_at: uint256) -> Period: + """ + Return the total fees due from block `opened\_at` to the current block. + """ + fees\_i : FeeState = Fees(self).fees\_at\_block(opened\_at, id) + fees\_j : FeeState = Fees(self).current\_fees(id) + return Period({ + borrowing\_long : self.slice(fees\_i.borrowing\_long\_sum, fees\_j.borrowing\_long\_sum), + borrowing\_short : self.slice(fees\_i.borrowing\_short\_sum, fees\_j.borrowing\_short\_sum), + funding\_long : self.slice(fees\_i.funding\_long\_sum, fees\_j.funding\_long\_sum), + funding\_short : self.slice(fees\_i.funding\_short\_sum, fees\_j.funding\_short\_sum), + @>>> received\_long : self.slice(fees\_i.received\_long\_sum, fees\_j.received\_long\_sum), + @>>> received\_short : self.slice(fees\_i.received\_short\_sum, fees\_j.received\_short\_sum), + }) +both will be updated when liquidable position will be closed **msheikhattari** -That's not quite right. 
From [`current_fees`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L137): - -```vyper - paid_short_term : uint256 = self.apply(fs.short_collateral, fs.funding_short * new_terms) - received_long_term : uint256 = self.divide(paid_short_term, fs.long_collateral) - received_long_sum : uint256 = self.extend(fs.received_long_sum, received_long_term, 1) -``` - -So as mentioned in my point above, the collateral at the time of each global fee update is used. When a user later claims his pro-rate share, he will receive his fraction of the total collateral *at the time of the fee update*. Fee updates are performed continuously after each operation, and the total collateral may no longer be representative due to liquidated positions being removed from this sum. However, they were still included in the `received_{short/long}_term`, which is added onto the globally stored `received_{short/long}_sum` - +That's not quite right. From [`current\_fees`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy\#L137): + +```vyper + paid\_short\_term : uint256 = self.apply(fs.short\_collateral, fs.funding\_short * new\_terms) + received\_long\_term : uint256 = self.divide(paid\_short\_term, fs.long\_collateral) + received\_long\_sum : uint256 = self.extend(fs.received\_long\_sum, received\_long\_term, 1) +``` + +So as mentioned in my point above, the collateral at the time of each global fee update is used. When a user later claims his pro-rate share, he will receive his fraction of the total collateral *at the time of the fee update*. Fee updates are performed continuously after each operation, and the total collateral may no longer be representative due to liquidated positions being removed from this sum. 
However, they were still included in the `received_{short/long}_term`, which is added onto the globally stored `received_{short/long}_sum`.
+
Thus some share of these global fees is not accessible.

**WangSecurity**

-As I understand it:
-The fee state is checked twice in the liquidate/close. Let's take close for example:
-1. The state is checked during the [close](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/core.vy#L290) in the positions. It doesn't update the state and calculates the funding_received and funding_paid (well, in fact, it calls `close`, then it calls `value`, which calls `calc_fees`) which gives us 0 funding_received and thus 0 added to the quote_collateral.
-2. Then `core::close` updates the Pool and then updates the [fees](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/core.vy#L294).
-3. When we update the fees, we use [`FeeState.base_collateral`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L186) (assuming the scenario with two shorts) when calculating the fees received by shorts for this term. But, the `FeeState.base collateral` is changed only [after calculating `received_short_term/sum`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L226)
-
-So even though the closed/liquidated position didn't receive any fees and they should go to another short, the `received_short_term` accounted as there were two shorts opened, and each received their funding fees.
-
-Hence, when the other short position gets the fees, they will add only a portion from that period, not the full fee.
-
-
-Thus, I agree it should remain valid.
However, medium severity should be kept because, in reality, if this situation occurs, the collateral of that closing/liquidatable position would be quite low (lower than `funding_paid`), there would be many positions, so the individual loss would be smaller (in terms of the amount, not %), the period with incorrectly applied fees would be small (given the fact that fees are updated at each operation). Hence, the loss is very limited. Planning to reject the escalation and leave the issue as it is, it will be applied at 10 am UTC.
+As I understand it:
+The fee state is checked twice in the liquidate/close. Let's take close for example:
+1. The state is checked during the [close](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/core.vy#L290) in the positions. It doesn't update the state and calculates the funding_received and funding_paid (well, in fact, it calls `close`, then it calls `value`, which calls `calc_fees`) which gives us 0 funding_received and thus 0 added to the quote_collateral.
+2. Then `core::close` updates the Pool and then updates the [fees](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/core.vy#L294).
+3. When we update the fees, we use [`FeeState.base_collateral`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L186) (assuming the scenario with two shorts) when calculating the fees received by shorts for this term.
But the `FeeState.base_collateral` is changed only [after calculating `received_short_term/sum`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L226).
+
+So even though the closed/liquidated position didn't receive any fees and they should go to another short, the `received_short_term` is accounted as if there were two shorts opened and each received their funding fees.
+
+Hence, when the other short position gets the fees, they will add only a portion from that period, not the full fee.
+
+
+Thus, I agree it should remain valid. However, medium severity should be kept because, in reality, if this situation occurs, the collateral of that closing/liquidatable position would be quite low (lower than `funding_paid`), there would be many positions, so the individual loss would be smaller (in terms of the amount, not %), and the period with incorrectly applied fees would be small (given that fees are updated at each operation). Hence, the loss is very limited. Planning to reject the escalation and leave the issue as it is; it will be applied at 10 am UTC.

**WangSecurity**

-Result:
-Medium
+Result:
+Medium
Unique

**sherlock-admin4**

@@ -2923,18 +2956,23 @@ Unique

Escalations have been resolved successfully!

Escalation status:
-- [spacegliderrrr](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/83/#issuecomment-2344430632): rejected
+- [spacegliderrrr](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/83/#issuecomment-2344430632): rejected

**WangSecurity**
+
+#93 and #18 are duplicates of this issue; they both identify that if the negative position is closed, the funding_received for it is reset to 0, but these funding fees are not received by any other user and remain stuck in the contract forever.
+
+

# Issue M-8: First depositor could DoS the pool 

Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/85 

+The protocol has acknowledged this issue.
+
## Found by 
bughuntoor

+
## Summary
First depositor could DoS the pool 

@@ -2966,30 +3004,28 @@ Manual Review

## Recommendation
Add a minimum liquidity requirement.

-
-
## Discussion

**msheikhattari**

-Escalate
-
-> DoS has two separate scores on which it can become an issue:
-> 1. The issue causes locking of funds for users for more than a week (overrided to 4 hr)
-> 2. The issue impacts the availability of time-sensitive functions (cutoff functions are not considered time-sensitive). If at least one of these are describing the case, the issue can be a Medium. If both apply, the issue can be considered of High severity. Additional constraints related to the issue may decrease its severity accordingly.
-Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue.
-
+Escalate
+
+> DoS has two separate scores on which it can become an issue:
+> 1. The issue causes locking of funds for users for more than a week (overrided to 4 hr)
+> 2. The issue impacts the availability of time-sensitive functions (cutoff functions are not considered time-sensitive). If at least one of these are describing the case, the issue can be a Medium. If both apply, the issue can be considered of High severity. Additional constraints related to the issue may decrease its severity accordingly.
+Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue. + Low at best. Per the severity guidelines, this is not DoS since no user funds are locked and no sensitive functionality is impacted (ongoing positions/LPs are not impacted). Additionally, this both assumes that no other LPs make any deposits within the same block as the attacker (as the price would be equivalent), and that the price is monotonically decreasing after the attack was initiated. Not only is it low impact, but also low likelihood. **sherlock-admin3** -> Escalate -> -> > DoS has two separate scores on which it can become an issue: -> > 1. The issue causes locking of funds for users for more than a week (overrided to 4 hr) -> > 2. The issue impacts the availability of time-sensitive functions (cutoff functions are not considered time-sensitive). If at least one of these are describing the case, the issue can be a Medium. If both apply, the issue can be considered of High severity. Additional constraints related to the issue may decrease its severity accordingly. -> Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue. -> +> Escalate +> +> > DoS has two separate scores on which it can become an issue: +> > 1. The issue causes locking of funds for users for more than a week (overrided to 4 hr) +> > 2. The issue impacts the availability of time-sensitive functions (cutoff functions are not considered time-sensitive). If at least one of these are describing the case, the issue can be a Medium. If both apply, the issue can be considered of High severity. Additional constraints related to the issue may decrease its severity accordingly. 
+> Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue. +> > Low at best. Per the severity guidelines, this is not DoS since no user funds are locked and no sensitive functionality is impacted (ongoing positions/LPs are not impacted). Additionally, this both assumes that no other LPs make any deposits within the same block as the attacker (as the price would be equivalent), and that the price is monotonically decreasing after the attack was initiated. Not only is it low impact, but also low likelihood. You've created a valid escalation! @@ -3000,14 +3036,14 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **WangSecurity** -While indeed this doesn't qualify the DOS requirements, this issue can still result in the functionality of the protocol being blocked and the users wouldn't be able to use this pool. I agree it's low likelihood, but it's still possible. - +While indeed this doesn't qualify the DOS requirements, this issue can still result in the functionality of the protocol being blocked and the users wouldn't be able to use this pool. I agree it's low likelihood, but it's still possible. + @spacegliderrrr since this issue relies on the precision loss, it requires a POC, can you make one? And also how low should be the price so it leads to rounding down? **spacegliderrrr** -Price doesn't really matter - it's just needed to deposit little enough tokens that they're worth 1 wei of quote token. So if for example the pair is WETH/ USDC, a user would need to deposit ~4e8 wei WETH (considering price of $2,500). - +Price doesn't really matter - it's just needed to deposit little enough tokens that they're worth 1 wei of quote token. So if for example the pair is WETH/ USDC, a user would need to deposit ~4e8 wei WETH (considering price of \$2,500). 
+ As for the PoC, because the code is written in Vyper and Foundry does not support it, I cannot provide a PoC. **WangSecurity** @@ -3016,18 +3052,18 @@ In that case, could you make a very detailed attack path, I see you provided an **WangSecurity** -One important thing to consider is the following question: -> We will report issues where the core protocol functionality is inaccessible for at least 7 days. Would you like to override this value? -Yes, 4 hours - -So, I see that DOS rules say that the funds should be locked for a week. But, the question is about core protocol functionality being inaccessible. The protocol specified they want to have issues about core protocol functionality being inaccessible for 4 hours. - +One important thing to consider is the following question: +> We will report issues where the core protocol functionality is inaccessible for at least 7 days. Would you like to override this value? +Yes, 4 hours + +So, I see that DOS rules say that the funds should be locked for a week. But, the question is about core protocol functionality being inaccessible. The protocol specified they want to have issues about core protocol functionality being inaccessible for 4 hours. + This finding, indeed doesn't lead to locking funds or blocking time-sensitive functions, but it can lead to the core protocol functionality being inaccessible for 4 hours. I see that it's low likelihood, but the likelihood is not considered when defining the severity. Hence, even though for this finding there have to be no other depositors in the next block or the price being lower for the next 4 hours, this can happen. Thus, medium severity for this issue is appropriate. Planning to reject the escalation and leave the issue as it is. **msheikhattari** -This issue doesn't impact ongoing operations, so its similar to frontrunning of initializers. No irreversible damage or loss of funds occur. 
- +This issue doesn't impact ongoing operations, so its similar to frontrunning of initializers. No irreversible damage or loss of funds occur. + Core protocol functionality being inaccessible should have some ramifications like lock of funds or broken time-sensitive functionality (like withdrawals). No funds are in the pool when this issue is taking place. **msheikhattari** @@ -3036,39 +3072,39 @@ In any case this issue does need a PoC - per the severity criteria all issues re **WangSecurity** -@spacegliderrrr could you make a detailed numerical POC, showcasing the precision loss and DOS. - -> Core protocol functionality being inaccessible should have some ramifications like lock of funds or broken time-sensitive functionality (like withdrawals). No funds are in the pool when this issue is taking place - +@spacegliderrrr could you make a detailed numerical POC, showcasing the precision loss and DOS. + +> Core protocol functionality being inaccessible should have some ramifications like lock of funds or broken time-sensitive functionality (like withdrawals). No funds are in the pool when this issue is taking place + As I've said previously, this still impacts the core protocol functionality, as the users cannot deposit into the pool, and this very well could last for more than 4 hours. Hence, this is sufficient for medium severity as the core protocol functionality is inaccessible for more than 4 hours. **spacegliderrrr** -1. Market pair used is WETH/USDT. Current WETH price $2,500. -2. User is the first depositor in the pool, depositing 4e8 WETH. 
Based on the lines of code below, the user is minted 1 lp token.:
-`amt0 = 4e8 * 2500e18 / 1e18 = 10000e8 = 1e12`
-`lowered = 1e12 / 1e12 = 1`
-```vyper
-@external
-@pure
-def base_to_quote(tokens: uint256, ctx: Ctx) -> uint256:
-  lifted : Tokens  = self.lift(Tokens({base: tokens, quote: ctx.price}), ctx)
-  amt0   : uint256 = self.to_amount(lifted.quote, lifted.base, self.one(ctx))
-  lowered: Tokens  = self.lower(Tokens({base: 0, quote: amt0}), ctx)
-  return lowered.quote
-```
-3. Next block comes and WETH price drops to $2,499.
-4. A user now attempts to deposit. Since LP tokens minted are respective to the current pool value, contract calculates pool value, using the same formula above
-`amt0 = 4e8 * 2499e18 / 1e18 = 0.9996e12`
-`lowered = 0.9996e12 / 1e12 = 0`
-5. User's LP tokens are calculated using the following formula. As the pool's value is 0, a division by 0 is attempted, which forces the tx to revert.
-```vyper
-def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
-  if ts == 0: return mv
-  else      : return (mv * ts) / pv
-```
-6. Pool is DoS'd until ETH price goes above $2,500 again.
-
+1. Market pair used is WETH/USDT. Current WETH price $2,500.
+2. User is the first depositor in the pool, depositing 4e8 WETH. Based on the lines of code below, the user is minted 1 lp token.:
+`amt0 = 4e8 * 2500e18 / 1e18 = 10000e8 = 1e12`
+`lowered = 1e12 / 1e12 = 1`
+```vyper
+@external
+@pure
+def base_to_quote(tokens: uint256, ctx: Ctx) -> uint256:
+  lifted : Tokens  = self.lift(Tokens({base: tokens, quote: ctx.price}), ctx)
+  amt0   : uint256 = self.to_amount(lifted.quote, lifted.base, self.one(ctx))
+  lowered: Tokens  = self.lower(Tokens({base: 0, quote: amt0}), ctx)
+  return lowered.quote
+```
+3. Next block comes and WETH price drops to $2,499.
+4. A user now attempts to deposit. 
Since LP tokens minted are respective to the current pool value, contract calculates pool value, using the same formula above
+`amt0 = 4e8 * 2499e18 / 1e18 = 0.9996e12`
+`lowered = 0.9996e12 / 1e12 = 0`
+5. User's LP tokens are calculated using the following formula. As the pool's value is 0, a division by 0 is attempted, which forces the tx to revert.
+```vyper
+def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
+  if ts == 0: return mv
+  else      : return (mv * ts) / pv
+```
+6. Pool is DoS'd until ETH price goes above $2,500 again.
+

**WangSecurity**

@@ -3077,8 +3113,8 @@ As far as I can tell, the POC is correct and this issue will indeed happen. As w

**WangSecurity**

-Result: 
-Medium 
+Result:
+Medium
 Unique

**sherlock-admin4**

@@ -3086,14 +3122,19 @@ Unique

Escalations have been resolved successfully!

Escalation status:
-- [msheikhattari](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/85/#issuecomment-2345044761): rejected
+- [msheikhattari](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/85/#issuecomment-2345044761): rejected
+
+

# Issue M-9: Whale LP providers can open positions on both sides to force users into high fees.

Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/89

+The protocol has acknowledged this issue.
+
## Found by 
bughuntoor

+
## Summary
Whale LP providers can open positions on both sides to force users into high fees.

@@ -3121,38 +3162,36 @@ Manual Review

## Recommendation
Consider a different way to calculate fees

-
-
## Discussion

**msheikhattari**

-Escalate 
-Invalid. Quoting a valid point from your own [comment](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/28#issuecomment-2344387728):
-> Issue should be low/info. Ultimately, all LPs would want is fees and this would give them the highest fees possible. 
Furthermore, the attack is extremely costly, as it would require user to lock up hundreds of thousands/ millions, losing a significant % of them. Any user would have an incentive to add liquidity at extremely high APY, which would allow for both new positions opens and LP withdraws.
-
-This attack inflates borrow fees, but the high APY will attract other LP depositors which would drive the utilization back down to normal levels, reducing the fee. Unlike the issue that you were escalating, this one has no such time sensitivity - the market would naturally tend towards rebalance within the next several days / weeks. It's not reasonable to assume that the existing positions would remain open despite high fees and other LPs would not enter the market over the coming days/weeks.
-
-Not only that, the other assumptions of this issue are incorrect:
-> If at any point they can no longer afford maintaining majority, they can simply close their positions without taking a loss, so this is basically risk-free.
-
-Wrong. Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. And this is assuming he had a majority stake to begin with, which for the more liquid pools where the attack is most profitable due to a large amount of open interest, is a highly questionable assumption.
-
+Escalate 
+Invalid. Quoting a valid point from your own [comment](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/28#issuecomment-2344387728):
+> Issue should be low/info. Ultimately, all LPs would want is fees and this would give them the highest fees possible. Furthermore, the attack is extremely costly, as it would require user to lock up hundreds of thousands/ millions, losing a significant % of them. 
Any user would have an incentive to add liquidity at extremely high APY, which would allow for both new positions opens and LP withdraws. + +This attack inflates borrow fees, but the high APY will attract other LP depositors which would drive the utilization back down to normal levels, reducing the fee. Unlike the issue that you were escalating, this one has no such time sensitivity - the market would naturally tend towards rebalance within the next several days / weeks. It's not reasonable to assume that the existing positions would remain open despite high fees and other LPs would not enter the market over the coming days/weeks. + +Not only that, the other assumptions of this issue are incorrect: +> If at any point they can no longer afford maintaining majority, they can simply close their positions without taking a loss, so this is basically risk-free. + +Wrong. Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. And this is assuming he had a majority stake to begin with, which for the more liquid pools where the attack is most profitable due to a large amount of open interest, is a highly questionable assumption. + The game theory makes it unlikely that the whale would be able to extract enough extra fees to even make a profit net of the operating fees of such an attack. **sherlock-admin3** -> Escalate -> Invalid. Quoting a valid point from your own [comment](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/28#issuecomment-2344387728): -> > Issue should be low/info. Ultimately, all LPs would want is fees and this would give them the highest fees possible. Furthermore, the attack is extremely costly, as it would require user to lock up hundreds of thousands/ millions, losing a significant % of them. 
Any user would have an incentive to add liquidity at extremely high APY, which would allow for both new positions opens and LP withdraws.
-> 
-> This attack inflates borrow fees, but the high APY will attract other LP depositors which would drive the utilization back down to normal levels, reducing the fee. Unlike the issue that you were escalating, this one has no such time sensitivity - the market would naturally tend towards rebalance within the next several days / weeks. It's not reasonable to assume that the existing positions would remain open despite high fees and other LPs would not enter the market over the coming days/weeks.
-> 
-> Not only that, the other assumptions of this issue are incorrect:
-> > If at any point they can no longer afford maintaining majority, they can simply close their positions without taking a loss, so this is basically risk-free.
-> 
-> Wrong. Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. And this is assuming he had a majority stake to begin with, which for the more liquid pools where the attack is most profitable due to a large amount of open interest, is a highly questionable assumption.
-> 
+> Escalate 
+> Invalid. Quoting a valid point from your own [comment](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/28#issuecomment-2344387728):
+> > Issue should be low/info. Ultimately, all LPs would want is fees and this would give them the highest fees possible. Furthermore, the attack is extremely costly, as it would require user to lock up hundreds of thousands/ millions, losing a significant % of them. Any user would have an incentive to add liquidity at extremely high APY, which would allow for both new positions opens and LP withdraws. 
+> +> This attack inflates borrow fees, but the high APY will attract other LP depositors which would drive the utilization back down to normal levels, reducing the fee. Unlike the issue that you were escalating, this one has no such time sensitivity - the market would naturally tend towards rebalance within the next several days / weeks. It's not reasonable to assume that the existing positions would remain open despite high fees and other LPs would not enter the market over the coming days/weeks. +> +> Not only that, the other assumptions of this issue are incorrect: +> > If at any point they can no longer afford maintaining majority, they can simply close their positions without taking a loss, so this is basically risk-free. +> +> Wrong. Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. And this is assuming he had a majority stake to begin with, which for the more liquid pools where the attack is most profitable due to a large amount of open interest, is a highly questionable assumption. +> > The game theory makes it unlikely that the whale would be able to extract enough extra fees to even make a profit net of the operating fees of such an attack. You've created a valid escalation! @@ -3167,16 +3206,16 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **spacegliderrrr** -@WangSecurity Issue above showcases a real issue which could occur if a whale decides to _attack_ a pool. - -> Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. - -True, there's some risk, though most of it can be mitigated. 
For example if the attack is performed when most opened positions are at negative PnL (which would mean closing them is profitable to the LP providers), most of the risk is mitigated as users have 2 choices - close early at a loss or keep the position open at high fees (either way, profitable for the LP provider).
-
-> the market would naturally tend towards rebalance within the next several days / weeks.
-
-True, though as mentioned, it would take days/ weeks in which the whale could profit.
-
+@WangSecurity Issue above showcases a real issue which could occur if a whale decides to _attack_ a pool.
+
+> Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit.
+
+True, there's some risk, though most of it can be mitigated. For example if the attack is performed when most opened positions are at negative PnL (which would mean closing them is profitable to the LP providers), most of the risk is mitigated as users have 2 choices - close early at a loss or keep the position open at high fees (either way, profitable for the LP provider).
+
+> the market would naturally tend towards rebalance within the next several days / weeks.
+
+True, though as mentioned, it would take days/ weeks in which the whale could profit.
+
The issue does involve some game theory, but nonetheless shows an actual risk to honest users.

**WangSecurity**

@@ -3185,8 +3224,8 @@ I also agree there are lots of risks, with this scenario. But, it's still possib

**WangSecurity**

-Result: 
-Medium 
+Result:
+Medium
 Unique

**sherlock-admin4**

@@ -3194,7 +3233,9 @@ Unique

Escalations have been resolved successfully!
Escalation status:
-- [msheikhattari](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/89/#issuecomment-2345018904): rejected
+- [msheikhattari](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/89/#issuecomment-2345018904): rejected
+
+

# Issue M-10: User could have impossible to close position if funding fees grow too big.

Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96

## Found by 
bughuntoor

+
## Summary
User could have impossible to close position if funding fees grow too big.

@@ -3240,20 +3282,18 @@ Manual Review

## Recommendation
Fix is non-trivial.

-
-
## Discussion

**KupiaSecAdmin**

-Escalate 
-
+Escalate 
+
The underflow does not happen by nature of `deduct` function. Thus this is invalid.

**sherlock-admin3**

-> Escalate 
-> 
+> Escalate 
+> 
> The underflow does not happen by nature of `deduct` function. Thus this is invalid.

You've created a valid escalation!

@@ -3264,18 +3304,18 @@ You may delete or edit your escalation comment anytime before the 48-hour escala

**spacegliderrrr**

-Escalate 
-
-Severity should be High. Issue above describes how a user could open risk-free max leverage positions, basically making a guaranteed profit from the LPs.
-
+Escalate 
+
+Severity should be High. Issue above describes how a user could open risk-free max leverage positions, basically making a guaranteed profit from the LPs.
+
Regarding @KupiaSecAdmin escalation above - please do double check the issue above. The underflow does not happen in `deduct` but rather in the `MATH.eval` operations. The problem lies within the fact that if order of withdraws is reversed, funding receiver can receive more fees than the total collateral (as long as it is available by other users who have said collateral not yet eaten up by funding fees). Then, some of the funding paying positions will be impossible to be closed.

**sherlock-admin3**

-> Escalate 
-> 
-> Severity should be High. 
Issue above describes how a user could open risk-free max leverage positions, basically making a guaranteed profit from the LPs. -> +> Escalate +> +> Severity should be High. Issue above describes how a user could open risk-free max leverage positions, basically making a guaranteed profit from the LPs. +> > Regarding @KupiaSecAdmin escalation above - please do double check the issue above. The underflow does not happen in `deduct` but rather in the `MATH.eval` operations. The problem lies within the fact that if order of withdraws is reversed, funding receiver can receive more fees than the total collateral (as long as it is available by other users who have said collateral not yet eaten up by funding fees). Then, some of the funding paying positions will be impossible to be closed. You've created a valid escalation! @@ -3286,71 +3326,71 @@ You may delete or edit your escalation comment anytime before the 48-hour escala **ami0x226** -This is invalid. - -> 4. The original long is closed. This does not have an impact on the total quote collateral, as it is increased by the funding_paid which in our case will be counted as exactly as much as the collateral (as in these calculations it cannot surpass it). And it then subtracts that same quote collateral. - -When original long is closed, total quote collateral is changed. -```rust -File: gl-sherlock\contracts\positions.vy -209: quote_reserves : [self.MATH.PLUS(pos.collateral), #does not need min() -210: self.MATH.MINUS(fees.funding_paid)], -211: quote_collateral: [self.MATH.PLUS(fees.funding_paid), -212: self.MATH.MINUS(pos.collateral)], -``` -Heres, `pos.collateral = X, fees.funding_paid = X + Y`. -Then, -`quote_collateral <- quote_collateral + X + Y - X = quote_collateral + Y = 2X + Y`, and -`quote_reserves <- quote_reserves + X - X - Y = quote_reserves - Y`. - -When original short is closed in step5, new total quote collateral is `2X + Y - (X + Y) = X` and there is no underflow in step6. 
-
-As a result, the scenario of the report is wrong.
-The loss causes in `quote_reserves`, but, in practice, Y is enough small by the frequent liquidation and it should be assumed that the liquidation is done correctly.
+This is invalid.
+
+> 4. The original long is closed. This does not have an impact on the total quote collateral, as it is increased by the funding_paid which in our case will be counted as exactly as much as the collateral (as in these calculations it cannot surpass it). And it then subtracts that same quote collateral.
+
+When original long is closed, total quote collateral is changed.
+```vyper
+File: gl-sherlock\contracts\positions.vy
+209: quote_reserves  : [self.MATH.PLUS(pos.collateral), #does not need min()
+210:                    self.MATH.MINUS(fees.funding_paid)],
+211: quote_collateral: [self.MATH.PLUS(fees.funding_paid),
+212:                    self.MATH.MINUS(pos.collateral)],
+```
+Heres, `pos.collateral = X, fees.funding_paid = X + Y`.
+Then,
+`quote_collateral <- quote_collateral + X + Y - X = quote_collateral + Y = 2X + Y`, and
+`quote_reserves <- quote_reserves + X - X - Y = quote_reserves - Y`.
+
+When original short is closed in step5, new total quote collateral is `2X + Y - (X + Y) = X` and there is no underflow in step6.
+As a result, the scenario of the report is wrong.
+The loss causes in `quote_reserves`, but, in practice, Y is enough small by the frequent liquidation and it should be assumed that the liquidation is done correctly.
 Especially, because the report does not mention about this vulnerability, I think this is invalid

**ami0x226**

-Also, Funding paid cannot exceed collateral of a position from the `apply` function. 
-
-```vyper
-File: gl-sherlock\contracts\math.vy
-167: def apply(x: uint256, numerator: uint256) -> Fee:
-172:   fee      : uint256 = (x * numerator) / DENOM
-173:   remaining: uint256 = x - fee if fee <= x else 0
-174:   fee_     : uint256 = fee if fee <= x else x
-175:   return Fee({x: x, fee: fee_, remaining: remaining})
-```
-
-```vyper
-File: gl-sherlock\contracts\fees.vy
-265: def calc(id: uint256, long: bool, collateral: uint256, opened_at: uint256) -> SumFees:
-269:   P_f : uint256 = self.apply(collateral, period.funding_long) if long else (
-270:         self.apply(collateral, period.funding_short) )
-274:   return SumFees({funding_paid: P_f, funding_received: R_f, borrowing_paid: P_b})
+Also, Funding paid cannot exceed collateral of a position from the `apply` function.
+```vyper
+File: gl-sherlock\contracts\math.vy
+167: def apply(x: uint256, numerator: uint256) -> Fee:
+172:   fee      : uint256 = (x * numerator) / DENOM
+173:   remaining: uint256 = x - fee if fee <= x else 0
+174:   fee_     : uint256 = fee if fee <= x else x
+175:   return Fee({x: x, fee: fee_, remaining: remaining})
+```
+
+```vyper
+File: gl-sherlock\contracts\fees.vy
+265: def calc(id: uint256, long: bool, collateral: uint256, opened_at: uint256) -> SumFees:
+269:   P_f : uint256 = self.apply(collateral, period.funding_long) if long else (
+270:         self.apply(collateral, period.funding_short) )
+274:   return SumFees({funding_paid: P_f, funding_received: R_f, borrowing_paid: P_b})
 ```

**spacegliderrrr**

-> Heres, pos.collateral = X, fees.funding_paid = X + Y.
-
-Here's where you're wrong. When the user closes their position, `funding_paid` cannot exceed `pos.collateral`. So `fees.funding_paid == pos.collateral` when closing the original long. Please re-read the issue and code again.
+> Heres, pos.collateral = X, fees.funding_paid = X + Y.
+
+Here's where you're wrong. When the user closes their position, `funding_paid` cannot exceed `pos.collateral`. 
So `fees.funding_paid == pos.collateral` when closing the original long. Please re-read the issue and code again.

**ami0x226**

-> > Heres, pos.collateral = X, fees.funding_paid = X + Y.
-> 
-> Here's where you're wrong. When the user closes their position, `funding_paid` cannot exceed `pos.collateral`. So `fees.funding_paid == pos.collateral` when closing the original long. Please re-read the issue and code again.
-
-That's true. I mentioned about it in the [above comment](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96#issuecomment-2345445382)
-> Also, Funding paid cannot exceed collateral of a position from the apply function.
-
-I just use `fees.funding_paid = X + Y` to follow the step2 of `bughuntoor`'s scenario:
-> 2. Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour)
+> > Heres, pos.collateral = X, fees.funding_paid = X + Y.
+> 
+> Here's where you're wrong. When the user closes their position, `funding_paid` cannot exceed `pos.collateral`. So `fees.funding_paid == pos.collateral` when closing the original long. Please re-read the issue and code again.
+
+That's true. I mentioned about it in the [above comment](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96#issuecomment-2345445382)
+> Also, Funding paid cannot exceed collateral of a position from the apply function.
+
+I just use `fees.funding_paid = X + Y` to follow the step2 of `bughuntoor`'s scenario:
+> 2. Eventually the funding fee grows larger than the whole long position (X + Y). 
it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour)

**rickkk137**

-invalid 
-funding_paid cannot exceed than collateral also funding_received cannot be greater funding_paid
+invalid 
+funding_paid cannot exceed the collateral, and funding_received cannot be greater than funding_paid

**WangSecurity**

@@ -3358,202 +3398,202 @@ funding_paid cannot exceed than collateral also funding_received cannot be great

**WangSecurity**

-We've got the POC from the sponsor: 
-
-<details> 
-POC - -```python -from ape import chain -import pytest -from conftest import d, ctx - -# https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96 - -# 1) There's an open long (for total collateral of X) and an open short position. Long position pays funding fee to the short position. -# 2) Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour) -# 3) A new user opens a new long position, once with X collateral. (total quote collateral is currently 2X) -# 4) The original long is closed. This does not have an impact on the total quote collateral, as it is increased by the funding_paid which in our case will be counted as exactly as much as the collateral (as in these calculations it cannot surpass it). And it then subtracts that same quote collateral. -# 5) The original short is closed. funding_received is calculated as X + Y and therefore that's the amount the total quote collateral is reduced by. The new total quote collateral is 2X - (X + Y) = X - Y. -# 6) Later when the user attempts to close their position it will fail as it will attempt subtracting (X - Y) - X which will underflow. - -# OR -# 1. Consider the original long and short positions, long pays funding fee to short. -# 2. Time goes by, liquidator bots fails and funding fee makes 100% of the long collateral (consider collateral is X) -# 3. Another long position is opened again with collateral X. -# 4. Time goes by, equivalent to funding fee of 10%. Total collateral at this moment is still 2X, so the new total funding paid is 1.2X -# 5. Short position closes and receives funding paid of 1.2X. Quote collateral is now reduced from 2X to 0.8X. -# 6. (Optional) Original long closes position. For them funding paid is capped at their collateral, so funding paid == collateral, so closing does not make a difference on the quote collateral. -# 7. 
The next long position holder tries to close. They're unable because their collateral is 1x, funding paid is 0.1x. Collateral calculation is 0.8X + 0.1X - 1X and underflow reverts - -PARAMS = { - 'MIN_FEE' : 1_00000000, - 'MAX_FEE' : 1_00000000, # 10%/block - 'PROTOCOL_FEE' : 1000, - 'LIQUIDATION_FEE' : 2, - 'MIN_LONG_COLLATERAL' : 1, - 'MAX_LONG_COLLATERAL' : 1_000_000_000, - 'MIN_SHORT_COLLATERAL' : 1, - 'MAX_SHORT_COLLATERAL' : 1_000_000_000, - 'MIN_LONG_LEVERAGE' : 1, - 'MAX_LONG_LEVERAGE' : 10, - 'MIN_SHORT_LEVERAGE' : 1, - 'MAX_SHORT_LEVERAGE' : 10, - 'LIQUIDATION_THRESHOLD' : 1, -} - -PRICE = 400_000 - -def test_issue(core, api, pools, positions, fees, math, oracle, params, - VEL, STX, LP, - mint, burn, open, close, - long, short, lp_provider, long2, owner, - mint_token): - - # setup - core.fresh("VEL-STX", VEL, STX, LP, sender=owner) - mint_token(VEL, d(100_000), lp_provider) - mint_token(STX, d(100_000), lp_provider) - mint_token(VEL, d(10_000) , long) - mint_token(STX, d(10_000) , long) - mint_token(VEL, d(10_000) , long2) - mint_token(STX, d(10_000) , long2) - mint_token(VEL, d(10_000) , short) - mint_token(STX, d(10_000) , short) - VEL.approve(core.address, d(100_000), sender=lp_provider) - STX.approve(core.address, d(100_000), sender=lp_provider) - VEL.approve(core.address, d(10_000) , sender=long) - STX.approve(core.address, d(10_000) , sender=long) - VEL.approve(core.address, d(10_000) , sender=long2) - STX.approve(core.address, d(10_000) , sender=long2) - VEL.approve(core.address, d(10_000) , sender=short) - STX.approve(core.address, d(10_000) , sender=short) - mint(VEL, STX, LP, d(10_000), d(4_000), price=PRICE, sender=lp_provider) - params.set_params(PARAMS, sender=owner) # set 10 % fee / block - - START_BLOCK = chain.blocks[-1].number - print(f"Start block: {START_BLOCK}") - - # 1) There's an open long (for total collateral of X) and an open short position. Long position pays funding fee to the short position. 
- # open pays funding when long utilization > short utilization (interest/reserves) - X = d(100) - p1 = open(VEL, STX, True , X , 10, price=PRICE, sender=long) - p2 = open(VEL, STX, False , d(5), 2, price=PRICE, sender=short) - assert not p1.failed, "open long" - assert not p2.failed, "open short" - - fees = params.dynamic_fees(pools.lookup(1)) - print(f"Pool fees: {fees}") - - # 2. Time goes by, liquidator bots fails and funding fee makes 100% of - # the long collateral (consider collateral is X) - chain.mine(10) - - # fees/value after - value = positions.value(1, ctx(PRICE)) - print(value['fees']) - print(value['pnl']) - assert value['fees']['funding_paid'] == 99900000 - assert value['fees']['funding_paid_want'] == 99900000 - assert value['fees']['borrowing_paid'] == 0 - # assert value['fees']['borrowing_paid_want'] == 99900000 - assert value['pnl']['remaining'] == 0 - - value = positions.value(2, ctx(PRICE)) - print(value['fees']) - print(value['pnl']) - assert value['fees']['funding_paid'] == 0 - assert value['fees']['funding_received'] == 99900000 - assert value['fees']['funding_received_want'] == 99900000 - # assert value['fees']['borrowing_paid'] == 4995000 - assert value['pnl']['remaining'] == 4995000 - - # 3. Another long position is opened again with collateral X. - p3 = open(VEL, STX, True, X, 10, price=PRICE, sender=long2) - assert not p3.failed - - # 4. Time goes by, equivalent to funding fee of 10%. - # Total collateral at this moment is still 2X, so the new total funding paid is 1.2X - chain.mine(1) - - print(f"Pool: {pools.lookup(1)}") - - value = positions.value(1, ctx(PRICE)) - print(f"Long 1: {value['fees']}") - print(f"Long 1: {value['pnl']}") - - value = positions.value(3, ctx(PRICE)) - print(f"Long 2: {value['fees']}") - print(f"Long 2: {value['pnl']}") - - assert value['fees']['funding_paid'] == 9990000 #TODO: value with mine(2) is high? 
- assert value['pnl']['remaining'] == 89910000 - - print(f"Blocks: {chain.blocks[-1].number - START_BLOCK}") - - # 5. Short position closes and receives funding paid of 1.2X. Quote collateral is now reduced from 2X to 0.8X. - tx = close(VEL, STX, 2, price=PRICE, sender=short) - print(core.Close.from_receipt(tx)[0]['value']) - # fees: [0, 0, 139860000, 139860000, 0, 0, 4995000] - assert not tx.failed - - print(f"Blocks: {chain.blocks[-1].number - START_BLOCK}") - - pool = pools.lookup(1) - print(f"Pool: {pool}") - # assert pool['quote_collateral'] == 9990000 * 0.8 #59940000 - value = positions.value(3, ctx(PRICE)) - print(f"Long 2: {value['fees']}") - print(f"Long 2: {value['pnl']}") - print(f"Long 2: {value['deltas']}") - - # 7. The next long position holder tries to close. They're unable because their collateral is 1x, funding paid is 0.1x. - # Collateral calculation is 0.8X + 0.1X - 1X and underflow reverts - tx = close(VEL, STX, 3, price=PRICE, sender=long2) - print(core.Close.from_receipt(tx)[0]['value']) - assert not tx.failed, "close 2nd long" - - # 6. (Optional) Original long closes position. For them funding paid is capped at their collateral, - # so funding paid == collateral, so closing does not make a difference on the quote collateral. - tx = close(VEL, STX, 1, price=PRICE, sender=long) - print(core.Close.from_receipt(tx)[0]['value']) - assert not tx.failed, "close original" - - print(f"Blocks: {chain.blocks[-1].number - START_BLOCK}") - - pool = pools.lookup(1) - print(f"Pool: {pool}") - - - assert False -``` - -
- - -Hence, the issue is indeed valid. About the severity, as I understand, it's indeed high, since there are no extensive limitations, IIUC. Anyone is free to correct me and the POC, but from my perspective it's indeed correct. - +We've got the POC from the sponsor: + +
+<details>
+POC
+
+```python
+from ape import chain
+import pytest
+from conftest import d, ctx
+
+# https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96
+
+# 1) There's an open long (for total collateral of X) and an open short position. Long position pays funding fee to the short position.
+# 2) Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour)
+# 3) A new user opens a new long position, once with X collateral. (total quote collateral is currently 2X)
+# 4) The original long is closed. This does not have an impact on the total quote collateral, as it is increased by the funding_paid which in our case will be counted as exactly as much as the collateral (as in these calculations it cannot surpass it). And it then subtracts that same quote collateral.
+# 5) The original short is closed. funding_received is calculated as X + Y and therefore that's the amount the total quote collateral is reduced by. The new total quote collateral is 2X - (X + Y) = X - Y.
+# 6) Later when the user attempts to close their position it will fail as it will attempt subtracting (X - Y) - X which will underflow.
+
+# OR
+# 1. Consider the original long and short positions, long pays funding fee to short.
+# 2. Time goes by, liquidator bots fails and funding fee makes 100% of the long collateral (consider collateral is X)
+# 3. Another long position is opened again with collateral X.
+# 4. Time goes by, equivalent to funding fee of 10%. Total collateral at this moment is still 2X, so the new total funding paid is 1.2X
+# 5. Short position closes and receives funding paid of 1.2X. Quote collateral is now reduced from 2X to 0.8X.
+# 6. (Optional) Original long closes position. 
For them funding paid is capped at their collateral, so funding paid == collateral, so closing does not make a difference on the quote collateral. +\# 7. The next long position holder tries to close. They're unable because their collateral is 1x, funding paid is 0.1x. Collateral calculation is 0.8X + 0.1X - 1X and underflow reverts + +PARAMS = { + 'MIN\_FEE' : 1\_00000000, + 'MAX\_FEE' : 1\_00000000, \# 10\%/block + 'PROTOCOL\_FEE' : 1000, + 'LIQUIDATION\_FEE' : 2, + 'MIN\_LONG\_COLLATERAL' : 1, + 'MAX\_LONG\_COLLATERAL' : 1\_000\_000\_000, + 'MIN\_SHORT\_COLLATERAL' : 1, + 'MAX\_SHORT\_COLLATERAL' : 1\_000\_000\_000, + 'MIN\_LONG\_LEVERAGE' : 1, + 'MAX\_LONG\_LEVERAGE' : 10, + 'MIN\_SHORT\_LEVERAGE' : 1, + 'MAX\_SHORT\_LEVERAGE' : 10, + 'LIQUIDATION\_THRESHOLD' : 1, +} + +PRICE = 400\_000 + +def test\_issue(core, api, pools, positions, fees, math, oracle, params, + VEL, STX, LP, + mint, burn, open, close, + long, short, lp\_provider, long2, owner, + mint\_token): + + \# setup + core.fresh("VEL-STX", VEL, STX, LP, sender=owner) + mint\_token(VEL, d(100\_000), lp\_provider) + mint\_token(STX, d(100\_000), lp\_provider) + mint\_token(VEL, d(10\_000) , long) + mint\_token(STX, d(10\_000) , long) + mint\_token(VEL, d(10\_000) , long2) + mint\_token(STX, d(10\_000) , long2) + mint\_token(VEL, d(10\_000) , short) + mint\_token(STX, d(10\_000) , short) + VEL.approve(core.address, d(100\_000), sender=lp\_provider) + STX.approve(core.address, d(100\_000), sender=lp\_provider) + VEL.approve(core.address, d(10\_000) , sender=long) + STX.approve(core.address, d(10\_000) , sender=long) + VEL.approve(core.address, d(10\_000) , sender=long2) + STX.approve(core.address, d(10\_000) , sender=long2) + VEL.approve(core.address, d(10\_000) , sender=short) + STX.approve(core.address, d(10\_000) , sender=short) + mint(VEL, STX, LP, d(10\_000), d(4\_000), price=PRICE, sender=lp\_provider) + params.set\_params(PARAMS, sender=owner) \# set 10 \% fee / block + + START\_BLOCK = 
chain.blocks[-1].number + print(f"Start block: {START\_BLOCK}") + + \# 1) There's an open long (for total collateral of X) and an open short position. Long position pays funding fee to the short position. + \# open pays funding when long utilization > short utilization (interest/reserves) + X = d(100) + p1 = open(VEL, STX, True , X , 10, price=PRICE, sender=long) + p2 = open(VEL, STX, False , d(5), 2, price=PRICE, sender=short) + assert not p1.failed, "open long" + assert not p2.failed, "open short" + + fees = params.dynamic\_fees(pools.lookup(1)) + print(f"Pool fees: {fees}") + + \# 2. Time goes by, liquidator bots fails and funding fee makes 100\% of + \# the long collateral (consider collateral is X) + chain.mine(10) + + \# fees/value after + value = positions.value(1, ctx(PRICE)) + print(value['fees']) + print(value['pnl']) + assert value['fees']['funding\_paid'] == 99900000 + assert value['fees']['funding\_paid\_want'] == 99900000 + assert value['fees']['borrowing\_paid'] == 0 + \# assert value['fees']['borrowing\_paid\_want'] == 99900000 + assert value['pnl']['remaining'] == 0 + + value = positions.value(2, ctx(PRICE)) + print(value['fees']) + print(value['pnl']) + assert value['fees']['funding\_paid'] == 0 + assert value['fees']['funding\_received'] == 99900000 + assert value['fees']['funding\_received\_want'] == 99900000 + \# assert value['fees']['borrowing\_paid'] == 4995000 + assert value['pnl']['remaining'] == 4995000 + + \# 3. Another long position is opened again with collateral X. + p3 = open(VEL, STX, True, X, 10, price=PRICE, sender=long2) + assert not p3.failed + + \# 4. Time goes by, equivalent to funding fee of 10\%. 
+    #    Total collateral at this moment is still 2X, so the new total funding paid is 1.2X
+    chain.mine(1)
+
+    print(f"Pool: {pools.lookup(1)}")
+
+    value = positions.value(1, ctx(PRICE))
+    print(f"Long 1: {value['fees']}")
+    print(f"Long 1: {value['pnl']}")
+
+    value = positions.value(3, ctx(PRICE))
+    print(f"Long 2: {value['fees']}")
+    print(f"Long 2: {value['pnl']}")
+
+    assert value['fees']['funding_paid'] == 9990000  # TODO: value with mine(2) is high?
+    assert value['pnl']['remaining'] == 89910000
+
+    print(f"Blocks: {chain.blocks[-1].number - START_BLOCK}")
+
+    # 5. Short position closes and receives funding paid of 1.2X. Quote collateral is now reduced from 2X to 0.8X.
+    tx = close(VEL, STX, 2, price=PRICE, sender=short)
+    print(core.Close.from_receipt(tx)[0]['value'])
+    # fees: [0, 0, 139860000, 139860000, 0, 0, 4995000]
+    assert not tx.failed
+
+    print(f"Blocks: {chain.blocks[-1].number - START_BLOCK}")
+
+    pool = pools.lookup(1)
+    print(f"Pool: {pool}")
+    # assert pool['quote_collateral'] == 9990000 * 0.8  # 59940000
+    value = positions.value(3, ctx(PRICE))
+    print(f"Long 2: {value['fees']}")
+    print(f"Long 2: {value['pnl']}")
+    print(f"Long 2: {value['deltas']}")
+
+    # 7. The next long position holder tries to close. They're unable because their collateral is 1x, funding paid is 0.1x.
+    #    Collateral calculation is 0.8X + 0.1X - 1X and underflow reverts
+    tx = close(VEL, STX, 3, price=PRICE, sender=long2)
+    print(core.Close.from_receipt(tx)[0]['value'])
+    assert not tx.failed, "close 2nd long"
+
+    # 6. (Optional) Original long closes position. For them funding paid is capped at their collateral,
+    #    so funding paid == collateral, so closing does not make a difference on the quote collateral.
+    tx = close(VEL, STX, 1, price=PRICE, sender=long)
+    print(core.Close.from_receipt(tx)[0]['value'])
+    assert not tx.failed, "close original"
+
+    print(f"Blocks: {chain.blocks[-1].number - START_BLOCK}")
+
+    pool = pools.lookup(1)
+    print(f"Pool: {pool}")
+
+    assert False
+```
+
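To see why steps 5–7 of the PoC must revert, the accounting can be reproduced in isolation. The sketch below is plain Python with illustrative names (`X`, `FEE`, `checked_sub` are stand-ins, not Velar contract code); it mirrors the PoC's numbers, where `funding_received` is paid out uncapped while each position's `funding_paid` is capped at its own collateral:

```python
X = 100_000_000   # each long's collateral, mirroring d(100) in the PoC
FEE = X // 10     # one block of 10% funding, i.e. 0.1X

def checked_sub(a: int, b: int) -> int:
    """Mimic checked unsigned subtraction: the transaction reverts on underflow."""
    if b > a:
        raise ValueError("underflow: transaction reverts")
    return a - b

quote_collateral = 2 * X  # long1 (X) and long2 (X) are both open

# Step 5: the short closes. Funding accrued on the *total* 2X of collateral makes
# funding_received = 1.2X, and it is not capped, so 1.2X is drained from the pool.
quote_collateral = checked_sub(quote_collateral, X + 2 * FEE)  # 0.8X remains

# Step 7: long2 closes. The pool credits their 0.1X funding paid and debits their
# 1X collateral: 0.8X + 0.1X - 1X underflows, so the close reverts and funds are stuck.
try:
    quote_collateral = checked_sub(quote_collateral + FEE, X)
    stuck = False
except ValueError:
    stuck = True
```

The imbalance is the crux: the payer side is capped per position, but the receiver side is computed from accrued totals, so the shared `quote_collateral` can be drawn below what the remaining positions are owed.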
+
+
+Hence, the issue is indeed valid. About the severity, as I understand, it's indeed high, since there are no extensive limitations, IIUC. Anyone is free to correct me and the POC, but from my perspective it's indeed correct.
+
But for now, planning to reject @KupiaSecAdmin escalation, accept @spacegliderrrr escalation and upgrade severity to high.

**KupiaSecAdmin**

-@WangSecurity - No problem with having this issue as valid but the severity should be Medium at most I think. 
+@WangSecurity - No problem with having this issue as valid but the severity should be Medium at most I think.
Because, 1) Usually in real-world, funding fees don't exceed the collateral. 2) When positions get into risk, liquidation bots will work most of times.

**WangSecurity**

-As far as I understand, this issue can be triggered intentionally, i.e. the first constraint can be reached intentionally, as explained at the end of the Vulnerability Detail section: 
-> as a user could abuse it to create a max leverage position which cannot be closed 
- 
- But you're correct that it depends on the liquidation bot malfunctioning, which is also mentioned in the report: 
-> Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour) 
- 
+As far as I understand, this issue can be triggered intentionally, i.e. the first constraint can be reached intentionally, as explained at the end of the Vulnerability Detail section:
+> as a user could abuse it to create a max leverage position which cannot be closed
+
+ But you're correct that it depends on the liquidation bot malfunctioning, which is also mentioned in the report:
+> Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour)
+
I agree that this is indeed an external limitation. Planning to reject both escalations and leave the issue as it is.

**WangSecurity**

-Result: 
-Medium 
+Result:
+Medium
Has duplicates

**sherlock-admin3**

@@ -3561,6 +3601,14 @@ Has duplicates
 
 Escalations have been resolved successfully!
 
 Escalation status:
-- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96/#issuecomment-2344287615): rejected
-- [spacegliderrrr](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96/#issuecomment-2344359663): rejected
+- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96/#issuecomment-2344287615): rejected
+- [spacegliderrrr](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96/#issuecomment-2344359663): rejected
+
+**sherlock-admin2**
+
+The protocol team fixed this issue in the following PRs/commits:
+https://github.com/Velar-co/gl-sherlock/pull/2
+
+
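The linked PR is not reproduced here, but the general shape of a fix for this class of bug can be sketched. The helper below is hypothetical (not Velar's API, and not necessarily what the PR does): it caps the payout on the receiving side at the collateral the pool actually holds, so a later close can never be asked to subtract more than remains:

```python
def settle_funding_received(quote_collateral: int, funding_received: int) -> tuple[int, int]:
    """Pay out at most what the pool holds, returning (remaining_collateral, paid).

    Capping the receiver side mirrors the existing cap on funding_paid, so the
    shared collateral pool can never go below zero; any shortfall is borne by
    the receiver instead of reverting every later close.
    """
    paid = min(funding_received, quote_collateral)
    return quote_collateral - paid, paid

# Scenario-1 numbers: the short is owed X + Y = 130 but only 100 remains,
# so the payout is clipped to 100 and the pool ends at 0 rather than underflowing.
assert settle_funding_received(100, 130) == (0, 100)
```

The trade-off is explicit: the last receiver absorbs the deficit created by unliquidated positions, rather than leaving later positions unable to close at all.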