The second post here is mainly just a summary of the written report, not a response to many of the points raised here by awesome experts like @Kam, @RoboMcGobo, @usurper, @shellvish and so forth.
I would appreciate it if your team could take the lead on responding to feedback here and steering the proposal towards something most of the community can support. Forum posts are for iteration, not for doubling down on the initial idea with no alterations.
A good way to do this is simply quoting users. It is a lot of work, but it makes sure every comment is considered and, if need be, implemented as a change.
We would like to address the additional questions posed by the community. We appreciate all the constructive conversations that are ongoing.
Based on community feedback we are planning to split up the proposals based on subject matter. We will be following up with four separate proposals (and threads) as follows:
Reduce the Trading Rewards "C" constant from 0.90 to 0.5.
Protocol Revenue Distribution:
a. 50% of all protocol revenue routed to the MegaVault.
b. 10% of all protocol revenue routed to the Treasury subDAO.
c. Recommendation that above an $80M level of annual protocol revenue, the Treasury subDAO could consider a Buy & Stake program.
Reduce the active set from 60 to 30 validators.
Cease support for the wethDYDX Smart Contract (i.e., the Bridge) on the dYdX Chain side.
The dYdX community sought further clarity on the goals and direction of the proposed changes. As mentioned in our proposal, the changes aim to improve liquidity on dYdX Chain markets, increase the attractiveness of the DYDX token, and encourage holding and staking DYDX, all to increase the security of the dYdX Chain and drive sustainable growth in the dYdX ecosystem. The recommended proposals map to these goals as follows:
Improve liquidity on dYdX Chain markets:
a. Route 50% of all protocol revenue to the MegaVault
Increase the attractiveness of the DYDX token
a. Increase utility through a Buy & Stake program
b. Reduce inflation of the circulating supply through the trading rewards reduction and the 10% allocation of protocol revenue to the Treasury subDAO
Maintain a lean, agile, and financially healthy chain
a. Cease support for the wethDYDX Smart Contract (i.e., the Bridge) on the dYdX Chain side.
b. Reduce the active set from 60 to 30 validators.
The proposals in this research report seek to support the DYDX token's financial value to preserve capital while ensuring long-term value accrual, all with a view to improving the dYdX Chain network's long-term health and security.
MegaVault is a core component of dYdX Unlimited and aims to improve liquidity on dYdX Chain's long-tail markets. It is important to support and bootstrap the product, whose success depends on the TVL it attracts.
MegaVault TVL is paramount for the success of dYdX Unlimited and dYdX Chain in general as it will increase liquidity, especially on long-tail assets.
Looking at a 24-hour period on 25 October, we can quickly observe how dYdX's volume/liquidity ratio per token degrades as we move down from highly liquid assets to less liquid assets on the platform.
dYdX fares very well compared to Binance in majors and very liquid altcoins like PEPE or SEI. Liquidity is significantly worse on long-tail markets such as POPCAT or STX.
The same token was doing $300M and $14M in volume on Binance and Hyperliquid respectively, with Hyperliquid offering 2bps slippage for orders larger than $100k.
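As a rough illustration of how such slippage figures are measured, here is a minimal sketch of a market buy walking an order book. The book levels and order size below are invented placeholders, not dYdX or Hyperliquid data:

```python
def slippage_bps(levels, order_usd):
    """levels: list of (price, size in base asset) asks, best price first."""
    best_ask = levels[0][0]  # top-of-book as the reference price
    remaining, cost, qty = order_usd, 0.0, 0.0
    for price, size in levels:
        take = min(remaining, price * size)  # dollars filled at this level
        cost += take
        qty += take / price
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("order exceeds visible liquidity")
    avg_price = cost / qty
    return (avg_price - best_ask) / best_ask * 1e4

# Hypothetical long-tail book: thin liquidity near the top.
book = [(1.000, 20_000), (1.002, 30_000), (1.005, 60_000)]
print(f"{slippage_bps(book, 100_000):.1f} bps for a $100k market buy")
```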
We propose allocating 50% of the protocol's revenue to the MegaVault to quickly attract TVL. We acknowledge that the 50% revenue share will have an impact on the staking APR, but we view this as a key investment in the long-term success of dYdX. The revenue share should also be reassessed a few months after launch, once statistics around its performance are available. It is important to note that the dYdX community may increase or decrease the 50% MegaVault revenue share at any time via a governance proposal.
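To make the APR impact concrete, here is a back-of-the-envelope sketch of the proposed 50/40/10 split. The revenue and staked-value figures are hypothetical placeholders, and it assumes for simplicity that all protocol revenue currently flows to stakers:

```python
annual_protocol_revenue = 40_000_000   # USD/year, assumed for illustration
total_staked_value      = 250_000_000  # USD value of staked DYDX, assumed

def staker_apr(staker_share):
    # share of protocol revenue flowing to stakers, divided by staked value
    return annual_protocol_revenue * staker_share / total_staked_value

print(f"100% to stakers (simplified status quo): {staker_apr(1.00):.1%}")
print(f"40% to stakers (50% MegaVault, 10% Treasury): {staker_apr(0.40):.1%}")
```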
We appreciate all the interesting suggestions around providing liquidity to the MegaVault. As mentioned in the Deep Dive: MegaVault blog post, the current design of the MegaVault only accepts USDC. This makes the suggestions around locking the DYDX token infeasible in the short term due to software design limitations, placing the idea beyond the scope of the current proposal.
We encourage the community to have further discussions around MegaVault's design, which the community and the dYdX team can evaluate as potential upgrades to the software in the future.
We appreciate all the feedback and acknowledge that our initial research focused on validator profitability, which was an issue raised by validators while discussing the Treasury subDAO proposal. It's clear there's also interest in discussing the potential latency improvements from reducing the active validator set from 60 to 30 on the dYdX Chain. Beyond profitability, we believe this reduction could drive meaningful improvements to the speed and reliability of dYdX Chain for the following reasons:
Network Efficiency: A larger validator set requires messages (like new orders) to travel farther across the network, often in multiple steps to reach all validators. This can slow down the network and introduce unpredictability. By reducing the validator count, messages can take more direct routes, minimising delays and enhancing both speed and reliability.
Faster Order Processing: For a competitive trading platform, rapid order processing is essential. Fewer validators mean fewer steps for orders to reach the proposer, which results in faster and more consistent processing times, improving the overall user experience to more closely match centralised exchanges (CEXs).
Faster Consensus: Fewer validators means fewer prevotes/precommits are required for consensus to reach finality (see the sketch after this list).
Node Location: Several validators have suggested that prioritising validators operating within Japan would be the most effective way to reduce latency. While we agree that this could improve latency, validator node location should not be seen as an alternative to the current proposal. Rather, validator node location and reducing the active set should be seen as two levers that the dYdX community could use to improve the speed and reliability of dYdX Chain. We strongly encourage all dYdX Chain validators to align with the MEV committee's guidelines and urge ecosystem participants to consider node location when choosing validators to stake to.
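As referenced above, here is a minimal sketch of the quorum arithmetic, assuming equal-weight validators for simplicity (in reality CometBFT counts voting power, i.e. stake, not validator heads):

```python
import math

def min_votes_for_quorum(n_validators):
    # strictly more than 2/3 of equal-weight validators must prevote/precommit
    return math.floor(2 * n_validators / 3) + 1

for n in (60, 30):
    print(f"{n} validators -> at least {min_votes_for_quorum(n)} votes per round")
# 60 validators -> at least 41 votes per round
# 30 validators -> at least 21 votes per round
```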
We believe a smaller validator set would enable a faster, high-quality trading experience, and maintain sufficient decentralization for the security of the dYdX Chain, while improving the financial health of the chain. Given that our proposal does not include the technical analysis around latency and focuses on the economics of the chain, we invite any technical teams without a direct stake in this proposal to share their perspectives and insights.
If the goal is to more closely match CEXs, then why is dYdX not a CEX? I don't think the goal of dYdX is to be a CEX or to offer the same as CEXs. CEXs offer those advantages because they are centralised and you have to trust them; if that is the perfect solution, then why do we have DeFi to start with? Just use a TradFi centralised exchange, right? The goal of dYdX is to offer an experience similar to CEXs through tech improvements while staying decentralised, not to move towards more centralisation. You eliminate 30 validators, then to improve further another 15, and so on, until in the end you have one validator running dYdX, basically being a CEX?
This was not suggested by "several validators"; it has been suggested by dYdX Trading since the genesis launch of dYdX v4, because most dYdX traders are based around that area.
Yes, it should. With this simple example I hope you can finally understand it:
-Assuming the bottom 30 validators are running from Tokyo while most of the top 30 are running far from Tokyo: if you eliminate the bottom 30 validators, you will make latency and the trader experience in dYdX v4 much worse.
-Assuming the bottom 30 validators are running from Tokyo while most of the top 30 are running far from Tokyo: if you eliminate the top 30 validators, you will make latency and the trader experience in dYdX v4 much better.
These go together. Reducing the validators not running from Tokyo is the optimal action to minimise dYdX latency and maximise the trader experience. Going even further, encouraging all validators to run their infra from Tokyo and keeping all 60 validators running in Tokyo would give better latency and trader experience results than arbitrarily removing the bottom 30 validators, many of which run from Tokyo.
This is a good message to Stride, Karpatkey, the dYdX Foundation and others: make validators running top infra from Tokyo a main delegation criterion.
How do you define "sufficient" decentralization? Here again you are missing an important point. You talk about decentralization of stake and security and say that 30 validators is sufficient; where is the limit of sufficient? Why is 30 sufficient, and why are 3 or 5 not sufficient, following your reasoning? Also, you talk about security, but security also depends on the value of the tokens staked. The point you keep missing is the decentralization of the orderbook: is doubling the centralization of the orderbook still sufficient decentralization according to you? Based on what data, research, or analysis do you conclude what is sufficient, or is it just an opinion that 30 looks like a nice number?
What financial health improvement? The revenue would still be the same with 60, 30, or 2 validators; the only difference is whether this revenue goes to 60, 30, or 2 validators. The only improvement is for the remaining validators, which will collect all the revenue of the eliminated ones.
For dYdX Trading it was initially a tradeoff between decentralization and latency, and 60 validators seemed like a good number. This indicates that dYdX places a high value on decentralization and found the best tradeoff of decentralization and latency. If they didn't care about decentralization, they would have stayed on Starknet or launched dYdX v4 with a few validators controlled by them, like Hyperliquid. So ideally we should try to keep 60 validators for decentralization while improving latency by having most validators run from Tokyo, editing the config file, and studying other Cosmos chains to improve latency as much as possible while still keeping the initial decentralization with 60 validators.
So why don't we reduce the 60 validators to just 4, like Hyperliquid, which has only 4 validators and is now looking to expand its validator set to ensure decentralization?
Dear Autostake Team!
My point is that smaller chains tend to think about reducing their validator set, while larger chains like BNB Chain are continuously expanding their validator set, from 21 validators to the current 45. A validator set of 60, like dYdX's, is actually small compared to other app-chains. dYdX is a blue-chip project, a leader in decentralized derivatives, so reducing the validator set from 60 to 30 simply because validators aren't profitable is unreasonable. Companies are responsible for managing their own revenue, so why interfere in each company's finances? There are many ways to reduce latency for dYdX, and we are ready to support the dYdX team in improving it, ensuring that we achieve latency improvements while keeping the validator set as decentralized as it currently is. Autostake team, are you ready to support the dYdX chain in reducing latency?
Dear Autostake Team!
If a smaller validator set is desired, why not choose 21 validators like EOS and BNB Chain did initially? However, large chains usually expand their validator set to ensure decentralization. We are young contributors who want to see dYdX grow even stronger in the future. Many young engineers, including ourselves, want to contribute even more to the dYdX chain, and we are ready to work with you to make this happen.
I think the best solution would be to change the validator configs as proposed by Autostake, convince all validators to move to Tokyo, and then observe the latency and block time. The social aspect of reducing the active set is very sensitive; dYdX hasn't been able to build a user community for various reasons, and now the validator community is at risk.
By the way, no representatives from HFT companies have commented on the matter. I believe they have valuable data that could be discussed. @Callen_Wintermute @Jordi @Carlos_Raven @e4-d, maybe you can add something to this discussion?
Hey everyone, Montagu from Citadel One, a DYDX validator.
We share many of the concerns voiced by community members regarding this proposal. I wanted to share a few inputs regarding some of the proposed changes, separately.
Revenue Distribution
This part is suggesting a 50-40-10 revenue split between MegaVault, Stakers and Treasury subDAO, respectively:
- Allocating a part of revenue to the Treasury subDAO is logical, as it reduces reliance on DYDX tokens to fund future development.
- Allocating half of the protocol's revenue to MegaVault seems excessive at this point. The feature isn't battle-tested and seems to be unprofitable at the moment. There are, without a doubt, benefits to having deeper liquidity, but I find it hard to justify allocating 50% of the protocol's revenue to it from the get-go. This should start at a much lower percentage and increase accordingly, not the other way around.
- A quote from the report: "The dYdX ecosystem currently lacks a 'buy and stake' mechanism"
Unless I'm mistaken, that is inaccurate. The community staking deal with Stride does exactly that. That being said, I'm not opposed, but perhaps better ideas could be explored, such as deepening DYDX liquidity, given that this will be actively managed by the subDAO.
Halvening the Validator Set
This part suggests reducing the Validator set from 60 to 30. The proposal's motivation is the profitability of Validators. Based on my understanding, the rationale here is to make up for the losses incurred by Validators following a 60% decrease in revenue.
Disclaimer: Citadel One would be directly affected by this change, so take this with a grain of salt.
There are many inconsistencies here that I'd like to point out:
- I'd be curious to know more about the figures used to benchmark profitability and how they were sourced. I doubt they represent a majority of the Val Set.
- This will only work if the stake from decommissioned validators is distributed pro-rata amongst the Top 30, which is the best-case scenario and unlikely. What's more likely is more concentration of VP, which will further hurt the profitability of the remaining Validators, ending up with the same problem the proposer is trying to solve.
- Validating is permissionless, meaning that validators are free to opt out whenever they want (for example, when they're not profitable). Imo, there is no need for a third party, with inaccurate data, to make that decision for them.
- That being said, there are valid reasons to adopt a smaller validator set, and they should be considered on their own merits. This is why I think the community should start a separate discussion about increasing the chain's performance, as there are alternative solutions worth exploring (mentioned above) before halving the Set.
Better performance and low latency should be a priority, but it's important to remember that we'll always lag behind CEXs and "semi-decentralized" DEXs, as decentralization (dYdX's competitive advantage) comes with its trade-offs.
TL;DR The proposer's rationale is flawed and backed by inaccurate analysis.
Barring the 10% of revenue routed to the Treasury subDAO and halting the bridge, all proposed changes here need to be further discussed, SEPARATELY. This isn't to say that some of the changes couldn't prove beneficial to the chain, but an approach of "if it didn't work we'll reassess" isn't the way to go here.
Dear RealVovochka!
This solution can be implemented immediately: all 60 validators can be located in Tokyo, and we can observe the latency and block time of the dYdX chain. I believe this can be done instantly, and we can then make a final decision on whether to reduce the validator set from 60 to 30 after this experiment is conducted.
I understand that reducing the validator set would have harmful effects, namely that validators would get cut out of the active set. I'm not here to comment on that (it makes me sad), only to comment on the technical side after years of working deep within the Comet codebase.
Reducing the number of validators would definitely speed up dYdX. There is no honest technical argument to believe otherwise. Centralized systems are the fastest, and as a network approaches a centralized system, it will speed up. The reality is that the gossip factor of Comet-based data (e.g. txs, votes, blockparts) is extremely high, so any additional data gets flooded around the network multiple times. With dYdX especially, which uses Skip:Connect (and therefore circulates heavy Vote Extension packets), there is a ton of data on the gossip network. The more validators there are, the more gossip there is, as each validator regossips the data it receives. This can throttle a network by preventing votes, proposals, and blockparts from reaching the selected proposer, who will effectively wait for the network to catch up.
Cutting down the number of validators will reduce the amount of traffic in the gossip network, will reduce the number of validators required to vote on committing blocks/proposals, and will concentrate stake into the remaining validators that are not cut. This will improve network latency in a couple ways:
On average, there will be a lower minimum number of validators from which votes are required to reach the 2/3 threshold, and so blocks will progress faster
If the validators are colocated, the latency between each peer will be much smaller, effectively increasing the gossip throughput of the network.
There will be significantly less data in the gossip network overall.
There is a significantly decreased chance that the selected proposer is a non-functioning validator (the risk of which increases as there are more validators)
Now, there are many other ways to reduce the latency / block time of a network that are currently being worked on by the Informal team (and the Skip team), including mempool performance improvements, reducing the size of vote extensions, direct peering amongst validators, etc. These are not ready, but will be helpful to achieve the same thing. This does not change the fact that decreasing the number of validators and colocating them will definitively lower block times and latency; it's just a technical reality.
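To illustrate the gossip-amplification argument, here is a crude back-of-the-envelope model of per-block gossip traffic. The peer count and payload size are invented placeholders rather than measured dYdX figures, and real gossip behaviour is considerably more complex:

```python
def gossip_kb_per_block(n_validators, peers_per_node, payload_kb):
    # crude upper bound: every node forwards the block's data to each peer once
    return n_validators * peers_per_node * payload_kb

for n in (60, 30):
    kb = gossip_kb_per_block(n, peers_per_node=10, payload_kb=500)
    print(f"{n} validators -> ~{kb / 1024:.0f} MB of gossip per block")
```

Halving the node count halves the traffic in this toy model; the real-world effect depends on peering topology, but the direction is the same.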
To clarify - I don't like the fact that decreasing the number of validators would centralize the network further, and also damage the businesses of the many incredible hardware teams working on keeping dYdX secure.
I was asked to comment purely on the technical thinking around validator set size and block time / chain performance.
The main point was missed, though. The maximum improvement in latency would come from reducing the number of validators AND specifically removing the validators not based in Tokyo.
@nethermind suggested removing the bottom 30 validators not because of any latency improvements, but based solely on incorrect data about profitability; they used some random $1k cost figure to justify the profitability analysis and the removal of validators. So you cannot now use Nethermind's recommendation, which was based on inaccurate profitability data, to claim that removing the bottom 30 validators is also the best way to improve latency; it just makes no sense.
We first need to identify the validators not running from Tokyo, and if the goal really is to improve latency the most, eliminate those validators. It is sad, yes, but they chose not to run from Tokyo despite recommendations. Again, you want to eliminate validators to improve latency, fine, but which validators?
-Those running far from Tokyo, which empirically hurt latency, and whose removal would give the best possible latency improvement?
-Or the bottom 30 validators, because Nethermind suggested it, not because of latency improvements, but because of profitability concerns based on inaccurate and arbitrarily chosen cost estimates?
Directionally, we agree with the proposed changes and think they will be beneficial to the dYdX Chain in the long term.
We are effectively shifting protocol revenue towards more productive and positive uses. Building up the MegaVault should be highly productive for the protocol; however, I think its profitability should be closely monitored. There is no point in directing 50% of protocol revenue to the MegaVault if it is largely unprofitable.
Lastly, we sympathise with a lot of the Validators that are affected by the reduction in the Validator set and think it would have been better to frame this reduction around increasing performance, as no one is forcing a Validator to join the network and it is their decision to operate at a loss. If we truly want to improve performance (which I think should be the number one priority), then the 50% cut likely makes sense; however, if it is about validator profitability, it could make sense to initially reduce the set to 45 and assess whether profitability improves.
If the geolocation of dYdX validators were globally decentralized, as well as the institutional users of dYdX, you could then talk about choosing 30 validators at random to eliminate, which could be the bottom 30. However, and this is key, the majority of institutional users of dYdX are near the Tokyo area, as are a good number of validators. In this situation, to improve performance you CAN'T just pick 30 validators at random to eliminate, like the bottom 30; you must identify which validators are based far from Tokyo, which on dYdX is around 30-40% of validators, and eliminate those selectively if what we really want is to improve performance.
The thing is, the author of the report @nethermind used an arbitrary assumed infra cost of $1k/month to justify that so many validators are running at a loss, but this assumption is totally wrong and shows a very poor understanding of validator costs. There are three main types of servers: bare metal, which you can rent at data centers, colocate in a data center, or run in your own data center; dedicated virtual servers (VDS), which are virtual but not shared, still with less performance than bare metal; and virtual private servers (VPS), which are shared virtual servers and have the worst performance. For dYdX specifically, since many validators are based in Tokyo, it is important to highlight that bandwidth in Japan is very expensive. Now to the analysis.
Of course, if you go with AWS, GCP and similar, you will pay insanely high costs for the simplest and lowest-spec servers. Also, given the block time of dYdX, let's assume virtual private servers or virtual dedicated servers wouldn't be the best for performance; the realistic options would then be the following:
Without giving names, you can find VPS and VDS providers for monthly costs much lower than the suggested $1k; in fact, you can find very powerful VDS for less than $100. Moreover, you can find top bare metal servers in great data centers in Japan for around $100-300. If we add unlimited bandwidth in Japan, which some providers offer for around $300, you could have a top bare metal server with unlimited bandwidth in Japan for less than $500. A top virtual dedicated server with very high bandwidth and specs could still be found for around $100-200 in total, very far from the suggested $1,000. Of course, if some validators go to AWS, GCP, or the most expensive providers and pay $3,000-$5,000 for the same services, that's their choice, but don't imply that every dYdX Chain validator has $1k infra costs per month.

We use a top provider with a bare metal server and unlimited bandwidth in Japan, we are consistently in the top 10 dYdX validators by best uptime and lowest MEV, and we don't pay anywhere close to $1k monthly for infra; it is more like half of that, to get the best possible performance. Validators with lower specs or without unlimited bandwidth could easily have just around $200-300 in monthly costs. So your whole analysis of validator profitability, and the suggestion to remove 30 validators based on that flawed analysis, is simply incorrect and inaccurate.
From the report: "The profitability calculation below assumes similar fixed infrastructure costs of $1,000/month and the staking APR net of emissions (inflation) has been considered. We acknowledge that infrastructure costs could be higher or lower, but use $1,000/month for simplicity."
You use $1,000 "for simplicity"; great argument. You didn't conduct any research or survey to estimate how accurate this $1k is. Yes, the costs could be much higher, like $5k or more, and the whole dYdX active set would be unprofitable, if everyone went to the most expensive providers possible. But companies don't open factories in Switzerland, with the highest costs and labor costs in the world; they choose low-cost locations. Similarly, the majority of validators don't go to the most expensive providers for the same services. Maybe Polychain or similar validators do, but the great majority don't. Which means most dYdX validators have costs much lower than $1k, not $1k or higher. Therefore your profitability analysis is totally wrong, and if it hadn't been discussed and corrected, your flawed analysis could have caused great damage to dYdX by eliminating many top contributing entities based on flawed assumptions.
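To make the disagreement concrete, here is a rough break-even sketch for a single validator. Every input is a hypothetical placeholder (not data from the report or from any specific validator), but it shows how much the infra-cost assumption alone can swing the conclusion:

```python
def monthly_profit(stake_usd, apr, commission, infra_cost_monthly):
    # validator income = commission taken on delegators' staking rewards
    annual_commission_income = stake_usd * apr * commission
    return annual_commission_income / 12 - infra_cost_monthly

stake, apr, commission = 1_000_000, 0.15, 0.05  # all assumed
for infra in (1_000, 300):  # the report's $1k vs the ~$300 argued above
    p = monthly_profit(stake, apr, commission, infra)
    print(f"infra ${infra}/mo -> profit ${p:,.0f}/mo")
```

With these assumed numbers, the same validator looks loss-making at $1,000/month in costs and profitable at $300/month, which is the crux of the objection above.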
I think we can completely remove running costs from the argument. No one is forcing anyone to operate a validator at a loss; it's a business decision for each company. This is an open market, and using this as an excuse to reduce the number of validators is somewhat hypocritical. I would also say that even with the current VP distribution, dYdX has decentralization issues. Why is no one questioning the fact that the top 1 and top 2 are a single entity, which is easy to trace through blockchain transactions?
There haven't been any changes in revenue sharing yet, but some are already anticipating changes and have undelegated 10M tokens. We must be careful with these changes.
dYdX is too slow (which was never mentioned as a pain point before) == reduce # of validators.
Strong level of mental gymnastics going on here.
Thanks @nethermind for commenting on some of the feedback directly with quotes. Can we also expect you to react to some of the alternate ideas for revenue and token incentives that would still give the same end result of boosting the MegaVault and making dYdX more sustainable?
Lastly, I want to make the point that it should also be evaluated whether the USDC yield can be converted to DYDX through a buy program and only then distributed to vaults and users. The impact on liquidity might be larger than from simply increasing the dollar-denominated yield. Many DeFi projects distribute in this way and see a large percentage of their stakers re-lock these tokens, reducing sell pressure on the market in aggregate.
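A minimal sketch of that point, with invented figures and an assumed re-lock rate rather than data from any project:

```python
monthly_yield_usd = 1_000_000  # USDC yield available to distribute, assumed
relock_rate       = 0.40       # fraction of DYDX recipients assumed to re-stake

# Direct USDC payout: no DYDX is bought, so no buy pressure on the token.
buy_pressure_direct = 0

# Buy-and-distribute: the full yield is market-bought as DYDX first; the
# tokens recipients re-lock are never sold back, so they remain net buying.
net_buy_pressure = monthly_yield_usd * relock_rate

print(f"direct USDC payout:  ${buy_pressure_direct:,} net buy pressure")
print(f"buy-and-distribute:  ${net_buy_pressure:,.0f} retained as net buy pressure")
```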
best,
Ertemann
Lavender.Five
PS. We can take the loss of being removed from the set; if it happens, so be it. I am just very opposed to the way of discussion on this and previous dYdX forum posts. Feedback is to be internalised, and I will keep promoting that for a fair governance process.