Introduction
Since its genesis on October 26, 2023, dYdX Chain has generated $232B in trading volume and distributed $39M in USDC to dYdX Chain validators and stakers.
On August 12, 2024, dYdX Trading announced the dYdX Unlimited software featuring significant improvements to the dYdX Chain open-source software, such as permissionless market listings, MegaVault, the potential for revenue sharing, and an Affiliate Program, among other things.
Despite product achievements, the DYDX token has been facing challenges due to high inflation and a significant amount of token unlocks. We believe that dYdX Unlimited presents a unique opportunity to optimize dYdX Chain and DYDX tokenomics for the protocol's long-term viability and success. Our proposals aim to (1) improve liquidity on dYdX Chain markets, (2) increase the attractiveness of the DYDX token, and (3) encourage holding and staking DYDX, all with a view to increasing the security of the dYdX Chain and driving sustainable growth in the dYdX ecosystem.
To achieve this goal, we recommend implementing revenue sharing, enhancing DYDX token utility, and reducing DYDX emissions, all while supporting the immediate growth requirements of the dYdX Chain. Implementing these changes will significantly enhance dYdX's competitiveness and reduce DYDX inflation. This will turn the DYDX token into a more robust asset, therefore benefiting the overall security and resiliency of the network.
We published our Analysis and Proposals on dYdX Chain and DYDX Tokenomics on the dYdX Forum on October 22, 2024, and it has garnered constructive participation and discussion.
Our extensive analysis of the proposal can be found in our research report.
Based on the community's feedback, we are splitting the proposals by subject matter. The four separate proposals (and threads) are as follows:
Reduce the Trading Rewards "C" constant from 0.90 to 0.50.
Protocol Revenue Distribution:
a. 50% of all protocol revenue routed to the MegaVault.
b. 10% of all protocol revenue routed to the Treasury subDAO.
c. Recommendation that above an $80M level of annual protocol revenue, the Treasury subDAO could consider a Buy & Stake program.
Reduce the active set from 60 to 30 validators.
Cease support for the wethDYDX Smart Contract (i.e., the Bridge) on the dYdX Chain side.
Summary of Recommendations
Reduce the active set from 60 to 30 validators.
Reduce the active set from 60 to 30 validators
The current active validator set on the dYdX Chain consists of 60 validators. At the current fee level, more than 17 active validators are unprofitable at their respective commission rates, after accounting for a fixed infrastructure cost of $1,000 per month. The proposed MegaVault and dYdX Treasury subDAO revenue shares would further reduce staking yields and, therefore, the funding validators receive from their commissions, exacerbating unprofitability.
Under the proposal, 50% of protocol revenue would be allocated to the MegaVault and 10% to the Treasury subDAO. At current commission rates, 34 validators in a 60-validator set would incur losses, while in a 45-validator set, 19 would remain unprofitable. Meanwhile, increasing validator commissions to boost profitability would lower the staking APR, which could encourage unbonding while discouraging staking. The equilibrium bonded ratio needed to achieve the current staking APR would decline, impacting dYdX Chain security.
We propose reducing the active validator set from 60 to 30, ensuring that all validators in the active set have a chance to remain profitable after accounting for MegaVault and Treasury subDAO revenue shares at current commission rates. We have conducted a thorough analysis of the proposals; for more details, please refer to our research report.
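To make the profitability math above concrete, here is a minimal sketch of the model: a validator earns its commission on the staking revenue that flows through its delegations, minus a fixed infrastructure cost. All inputs (monthly revenue, stake share, commission rate) are hypothetical placeholders, not actual dYdX Chain figures.

```python
# Minimal sketch of the validator profitability model described above.
# All inputs are hypothetical placeholders, not actual dYdX Chain data.

MONTHLY_INFRA_COST = 1_000  # fixed infra cost assumed in the analysis (USD)

def monthly_margin(protocol_revenue: float, staker_share: float,
                   stake_share: float, commission: float) -> float:
    """Commission earned on the staking revenue flowing through this
    validator's delegations, minus the fixed infrastructure cost."""
    staking_revenue = protocol_revenue * staker_share
    return staking_revenue * stake_share * commission - MONTHLY_INFRA_COST

revenue = 4_000_000   # hypothetical monthly protocol revenue (USD)
stake_share = 0.005   # hypothetical 0.5% share of total bonded stake
commission = 0.05     # hypothetical 5% commission rate

# Today: 100% of protocol revenue flows to stakers and validators.
print(monthly_margin(revenue, 1.00, stake_share, commission))   # breakeven: 0.0

# Proposed: 50% to MegaVault + 10% to Treasury subDAO leaves 40% for stakers,
# shrinking the commission base and pushing marginal validators into loss.
print(monthly_margin(revenue, 0.40, stake_share, commission))   # -600.0
```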
Network performance and reducing the active set from 60 to 30 validators
We appreciate that validators could decide to operate at a loss with the hope that volume and revenue increase on dYdX Chain, but we think it is in the best interest of the dYdX ecosystem to reduce the active set.
The proposed reduction of the active validator set from 60 to 30 takes into account the impact on validator profitability, assuming the approval of a 50% revenue share towards MegaVault and 10% towards the Treasury subDAO. At the time of the analysis, 17 validators were unprofitable with 100% of protocol revenue directed to stakers and validators, assuming a fixed infrastructure cost of $1,000/month (which, again, we acknowledge may not be representative in all cases but, in our opinion, serves as a good proxy). With the proposed revenue share, over 34 (of 60) validators could face unprofitability, accounting for 57% of the active validators. This concern has been echoed by comments suggesting that some validators have been operating at a loss since the genesis of dYdX Chain. Validator financial health is important to the chain's security and overall success.
We do not think there are significant impacts on validator network decentralization from reducing the active set by 50%. The top 30 validators have been delegated 86% of the total stake, with the top 4 accounting for 36%. The report notes that the Nakamoto Coefficient would remain at 4, and the HHI would increase slightly to 552 if the active set reduction is implemented. We strongly advocate for decentralization, and recommend the community adopt a delegation policy framework that encourages a more even distribution of stake, particularly supporting validators with lower delegations. Additionally, we propose reassessing the active validator set in the future, potentially increasing it once dYdX Chain achieves sufficient unincentivized volume. This should occur alongside the stabilization of the DYDX price and the bootstrapping of new initiatives, which could allow for a reduction in the revenue share allocated towards them.
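For readers who want to reproduce the concentration metrics cited above, the sketch below computes the Nakamoto Coefficient (the smallest number of validators controlling more than 1/3 of voting power) and the Herfindahl-Hirschman Index from a stake distribution. The example distribution is hypothetical; plugging in live delegation data would reproduce the report's figures.

```python
# Sketch: the two concentration metrics referenced in the report.
# The stake distribution below is hypothetical, not live dYdX Chain data.

def nakamoto_coefficient(stakes: list[float]) -> int:
    """Smallest number of validators whose combined voting power exceeds
    1/3 of the total (enough to halt a 2/3-threshold consensus)."""
    total, running = sum(stakes), 0.0
    for i, s in enumerate(sorted(stakes, reverse=True), start=1):
        running += s
        if running > total / 3:
            return i
    return len(stakes)

def hhi(stakes: list[float]) -> float:
    """Herfindahl-Hirschman Index over percentage shares (0-10,000 scale)."""
    total = sum(stakes)
    return sum((100 * s / total) ** 2 for s in stakes)

stakes = [12.0, 9.0, 8.0, 7.0] + [2.0] * 26  # hypothetical 30-validator set
print(nakamoto_coefficient(stakes))  # 4 with this distribution
print(round(hhi(stakes)))            # ~571 with this distribution
```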
This proposal emphasizes the importance of maintaining a lean, agile, financially healthy validator set to ensure dYdX Chain's security, leaving the final decision to the community.
Additionally, reducing the active validator set could improve network performance. We have not done any technical analysis, but we assume that decreasing the active set will result in less network traffic.
Community feedback indicated strong interest in exploring potential latency and performance improvements from reducing the validator set.
We appreciate all the feedback and acknowledge that our initial research focused on validator profitability, which was an issue raised by validators while discussing the Treasury subDAO proposal. It's clear there's also interest in discussing the potential latency improvements from reducing the active validator set from 60 to 30 on the dYdX Chain. Beyond profitability, we believe this reduction could drive meaningful improvements to the speed and reliability of dYdX Chain for the following reasons:
Network Efficiency: A larger validator set requires messages (like new orders) to travel farther across the network, often in multiple steps to reach all validators. This can slow down the network and introduce unpredictability. By reducing the validator count, messages can take more direct routes, minimising delays and enhancing both speed and reliability.
Faster Order Processing: For a competitive trading platform, rapid order processing is essential. Fewer validators mean fewer steps for orders to reach the proposer, which results in faster and more consistent processing times, improving the overall user experience to more closely match centralised exchanges (CEXs).
Faster Consensus: Fewer validators means fewer prevotes/precommits are required for consensus to reach finality (see the sketch after this list).
Node Location: Several validators have suggested that prioritising validators operating within Japan would be the most effective way to reduce latency. While we agree that this could improve latency, validator node location should not be seen as an alternative to the current proposal. Rather, validator node location and reducing the active set should be seen as two levers that the dYdX community could leverage to improve speed and reliability of dYdX Chain. We strongly encourage all dYdX Chain validators to align with the MEV committee's guidelines and urge ecosystem participants to consider node location when choosing validators to stake to.
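To put rough numbers on the consensus point, the sketch below computes the +2/3 vote threshold for different set sizes, assuming equal voting power per validator (a simplification; CometBFT thresholds are actually computed over voting power, not validator count).

```python
# Sketch: +2/3 prevote/precommit threshold at different validator counts.
# Assumes equal voting power per validator; real CometBFT computes the
# threshold over voting power, so treat this as an illustration only.

def votes_for_finality(num_validators: int) -> int:
    """Smallest vote count strictly greater than 2/3 of the set."""
    return num_validators * 2 // 3 + 1

for n in (60, 45, 30):
    print(f"{n} validators -> {votes_for_finality(n)} votes per round")
# 60 validators -> 41 votes per round
# 45 validators -> 31 votes per round
# 30 validators -> 21 votes per round
```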
We believe a smaller validator set would enable a faster, high-quality trading experience, and maintain sufficient decentralization for the security of the dYdX Chain, while improving the financial health of the chain. Given that our proposal does not include the technical analysis around latency and focuses on the economics of the chain, we invite any technical teams without a direct stake in this proposal to share their perspectives and insights.
This was followed by valuable community input, highlighted below.
About the Validator Set
Reducing the number of validators to 30 could alleviate the load on consensus mechanisms, thereby enhancing the UX for traders. Hyperliquid achieves its superior UX by operating with just 4 validators. Currently, there is virtually no demand from traders for greater decentralization, so this suggestion is intriguing in terms of improving product quality.
However, it's also important to recognize that this perspective largely aligns with the interests of the top 30 validators, so the community should consider this proposal with a balanced and careful approach.
Secondly, about @eguegu's mention of consensus: dYdX already recommended Tokyo as the infra location for best latency and hence the best trader experience, so if you really want to improve trader experience you should eliminate all the validators not based in Tokyo, not the "smaller 30 validators". Also, regarding your comment that "there is virtually no demand from traders for greater decentralization": of course users don't care about the underlying tech. It was the dYdX team that decided to decentralize all components of dYdX, including the orderbook and matching engine (the first DEX to achieve this), and that's why they migrated from v3 to v4.
Validator Set Reduction
I'm very strongly in favor of reducing the validator set, but not for the reasons proposed here. There's been a lot of commentary here about validator profitability, even by the proposers.
To me, this is less an issue of profitability (as noted in the report, on most chains validators seem happy to run unprofitably) and more a performance issue. dYdX is a trading platform that relies heavily on fast trade execution and low latency. The biggest bottleneck right now is the time it takes for validators to come to consensus.
If traders lose value because of latency, they won't trade here. Reducing the set to 30 will reduce instances of this occurring. It truly is that simple.
Further, and as I've repeated on numerous occasions, large validator set sizes do not have a meaningful impact on overall decentralization.
Note: In the event that this does pass, Stride will also have to adjust its delegation program for dYdX and move delegations around. We're already thinking through what this could look like and expect to have a plan out in the coming weeks (subject to this passing).
Validator Set Reduction
As we mentioned in the analysis, we found the profitability reasoning to be very weak; the only justification we could identify was improving the user experience. If the project deems it necessary to enhance the user experience, then this is certainly an appropriate measure. However, the downsides are increased centralization (which, we agree, may not be significant) and the impact on governance, since the reduction will also halve participation in forums like this one, along with contributions from those token holders. A further impact is the loss of these stakeholders, as it is very likely they will sell the tokens they had purchased to engage in the activity.
Improving execution with networking performance
Through our work with the MEV Committee, we have repeatedly identified issues of inefficient order gossiping resulting in orderbook discrepancies. Orders submitted by traders don't reach the relevant block proposer in time, resulting in worse pricing, slower execution, or both. This isn't due to any malicious activity, but a result of networking problems.
It's worth noting that practically every validator has, at some point in time, displayed issues with discrepancy. Even under optimal configurations, validators display discrepancies. This leads us to believe the network simply isn't efficient enough to process trading demand at the expected performance level.
By halving the validator count, we effectively halve the network topology for an order to reach the next block proposer. We reduce peering requirements, allowing for more direct routes among validators and trading nodes. Theoretically, the reduction should allow orders to gossip more efficiently through a leaner network, improving discrepancies.
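As a rough intuition for the "leaner network" point, the sketch below estimates how many gossip hops a flooded message needs to reach every node, as a function of node count and per-hop fanout. The fanout value and uniform-flooding model are simplifying assumptions, not measurements of the dYdX p2p layer.

```python
# Sketch: gossip hops needed to flood N nodes when each node forwards
# to `fanout` fresh peers per hop. Uniform flooding is a simplification
# of real p2p gossip; the fanout value is a hypothetical assumption.
import math

def flood_hops(num_nodes: int, fanout: int) -> int:
    """Hops until fanout**hops >= num_nodes, i.e. ceil(log_fanout(N))."""
    return math.ceil(math.log(num_nodes) / math.log(fanout))

for n in (60, 30):
    print(f"{n} nodes, fanout 6 -> ~{flood_hops(n, 6)} hops worst case")
# 60 nodes -> ~3 hops; 30 nodes -> ~2 hops at fanout 6
```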
As far as I know, the number of validators at launch was chosen arbitrarily. There is no research or justification for why 60 is an optimal number of active validators. Instead, we're learning from experience now that 60 is probably a sub-optimal amount. Why should we stick with a number that isn't working? Reducing this number is a reasonable experiment to improve execution, making dYdX a more competitive trading venue. Thankfully, this is something we can continue to adjust or revert over time based on future learnings.
Obviously, as mentioned in the thread, we risk losing a number of high quality validator teams contributing to the protocol. It's not an easy decision to make, but dYdX has to perform better on execution if it wants to compete with other venues.
Reduction of the Validator Set
I can't directly speak to the profitability of validators, and whether or not validators want to run unprofitably on networks. However, I can speak to the importance of having a highly performant chain.
Traders, particularly quantitative traders, have a ~limitless appetite for transaction throughput. On venues where they are offered the chance, traders will spend millions to optimize throughput on the order of nanoseconds (e.g. NYSE). dYdX contributors and validators have done a truly spectacular job at optimizing the dYdX chain - block times are down to ~1.1s running a fully decentralized network. This is done through many technical optimizations at the chain level, as well as social coordination among validators to optimize the geographic placement of their nodes.
However, the reality is that other decentralized perp markets, like Hyperliquid, operate in a pseudo-centralized manner with few nodes that are tightly colocated. This allows them to give extremely strong guarantees to traders around trade execution, which drives much more liquidity and volume to the venue.
I've heard from multiple liquidity providers that dYdX performance is still below other markets. Unfortunately, dYdX is still compared to fully centralized venues like Binance, Coinbase, etc., as well as pseudo-centralized venues like Hyperliquid. Having even 1-5% of transactions fail or not get processed in time is enough to convince HFT firms to stop supporting dYdX.
I view this as a critical issue for dYdX, and it should be the protocol's top goal. Market makers offboarding from dYdX would be catastrophic.
The long-term viability of dYdX is fully dependent on attracting the most competitive and efficient liquidity providers. I fully believe that reducing the validator set can only improve the performance of the chain.
As other contributors have mentioned, validators already mostly colocate their nodes, and we would achieve large gains by improving the technical setup of a few underperforming validators. I fully agree with this, and I believe any delegations done by the dYdX protocol or the Treasury SubDAO should only delegate to highly performant validators. I don't disagree with this view at all, and also view the underperformance of large validators as a critical issue.
However, it's certainly true that fewer validators will lead to faster blocks. Fewer validators means less gossiping on the p2p layer, and fewer nodes needed to reach 2/3 consensus. Even in a world where all validators are located in the same room, fewer nodes will still directly lead to faster blocks.
Carl highlights some empirical data about this here - it is a fact that the current dYdX network has performance issues. We need to increase the tx throughput of the network, or we risk losing meaningful liquidity.
I am aware that this reduction would be painful, but I do strongly believe it's necessary for the long-term competitiveness of dYdX. I believe that one of dYdX's core goals should be optimizing tx throughput for liquidity providers, otherwise all validators will be negatively affected in the long-term.
We have plenty of scope to optimise block times further, as is, with 60 validators located close together as dYdX recommends.
Check out Injective: they have 60 geographically distributed validators across the world and are still able to achieve 0.6s block times.
Happy to share the config changes necessary to achieve this.
Latency: it is necessary to analyze how latency can be reduced, and to support with technical data the number, location, and quality of validators, as well as what technical alternatives (if any) are needed to achieve this objective. It is equally important, for the sake of transparency, to demonstrate to the community the actual arguments for reducing the number of validators. Reducing the latency of the dYdX Chain and higher per-validator profitability in a smaller set are justifiable objectives (always keeping latency reduction as the main one). On the other hand, reducing the validator set with the sole objective of increasing the profitability of dYdX's largest current validators to the detriment of the rest would be a purely and simply predatory positioning of large validators toward smaller ones. I insist that this issue must be clarified beyond any doubt within the community.
Hi dYdX Community!
I'm wondering why we don't reduce the validator set gradually, for instance by decreasing it from 60 to 50 and reducing the block time from 1 second to 0.6 seconds, to see how much the network speed of the dYdX chain improves. If we suddenly reduce from 60 to 30 validators without lowering the block time, would it actually be faster? Do you have any in-depth reports on how much speed would realistically change? We would like to have an in-depth discussion about your statement that reducing the validator set would increase speed by a certain percentage. The options are to reduce the block time to 500 ms while keeping 60 validators, or to keep the block time at 1 second and reduce the validator set to 30.
That being said, there are valid reasons to adopt a smaller validator set, and they should be considered on their own merits. This is why I think the community should start a separate discussion about increasing the chain's performance, as there are alternative solutions worth exploring (mentioned above) before halving the set.
Better performance and low latency should be a priority, but it's important to remember that we'll always lag behind CEXs and "semi-decentralized" DEXs, as decentralization (dYdX's competitive advantage) comes with its trade-offs.
Hey all, Mag from Skip here.
I understand that reducing the validator set would have harmful effects - namely, validators would get cut out of the active set. I'm not here to comment on that (it makes me sad), only to comment on the technical side after years of working deep within the Comet codebase.
Reducing the number of validators would definitely speed up dYdX. There is no honest technical argument to believe otherwise. Centralized systems are the fastest, and as a network approaches a centralized system, it will speed up. The reality is that the gossip factor of Comet-based data (e.g. txs, votes, blockparts) is extremely high, so any additional data gets flooded around the network multiple times. With dYdX especially, which uses Skip:Connect (and therefore circulates heavy Vote Extension packets), there is a ton of data on the gossip network. The more validators there are, the more gossip there is, as each validator regossips the data it receives. This can throttle a network by preventing votes, proposals, and blockparts from reaching the selected proposer, who will effectively wait for the network to catch up.
Cutting down the number of validators will reduce the amount of traffic in the gossip network, will reduce the number of validators required to vote on committing blocks/proposals, and will concentrate stake into the remaining validators that are not cut. This will improve network latency in a couple ways:
On average, there will be a lower minimum number of validators from which votes are required to reach the 2/3 threshold, and so blocks will progress faster
If the validators are colocated, the latency between each peer will be much smaller, effectively increasing the gossip throughput of the network.
There will be significantly less data in the gossip network overall.
There is a significantly decreased chance that the selected proposer is a non-functioning validator (a risk which increases as there are more validators)
Now, there are many other ways to reduce the latency / block time of a network that are currently being worked on by the Informal team (and the Skip team) - including mempool performance improvements, reducing the size of vote extensions, direct peering amongst validators, etc. These are not ready, but will be helpful to achieve the same thing. This does not change the fact that decreasing the number of validators and colocating them will definitively lower blocktimes and latency - it's just a technical reality.
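A back-of-envelope way to see the traffic point above: if every validator regossips each payload to its peers, per-message traffic grows roughly linearly with validator count, so halving the set roughly halves gossip load. The peer count and vote-extension packet size below are hypothetical assumptions, not measurements of dYdX's gossip layer.

```python
# Sketch: per-message gossip traffic vs. validator count.
# Peers-per-validator and payload size are hypothetical; real Comet
# gossip dedupes and batches, so treat this as a rough upper bound.

def flood_bytes(num_validators: int, peers_per_validator: int,
                payload_bytes: int) -> int:
    """Each validator forwards the payload to each of its peers once."""
    return num_validators * peers_per_validator * payload_bytes

VE_PAYLOAD = 50_000  # hypothetical vote-extension packet size (bytes)
for n in (60, 30):
    mb = flood_bytes(n, 10, VE_PAYLOAD) / 1e6
    print(f"{n} validators -> ~{mb:.0f} MB flooded per vote-extension round")
# 60 validators -> ~30 MB; 30 validators -> ~15 MB with these assumptions
```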
Lastly, we sympathise with many of the validators affected by the reduction in the validator set and think it would've been better to frame this reduction around increasing performance, as no one is forcing a validator to join the network and it's their decision to operate at a loss. If we truly want to improve performance (which I think should be the number one priority), then the 50% cut likely makes sense; however, if it's about validator profitability, it could make sense to initially reduce the set to 45 and assess whether profitability improves.