Analysis and Proposals on dYdX Chain and DYDX Tokenomics

For dYdX trading it was initially a trade-off between decentralization and latency, and 60 validators seemed like a good number. This indicates that dYdX places
a high value on decentralization and that they found the best trade-off between decentralization and latency. If they didn’t care about decentralization, they would have stayed on Starknet or launched dYdX v4 with a few validators controlled by them, like Hyperliquid. So ideally, we should try to keep 60 validators for decentralization while improving latency: have most validators run from Tokyo, tune the config files, and study other Cosmos chains to improve latency as much as possible while still keeping the initial decentralization of 60 validators.

2 Likes

So why don’t we reduce the 60 validators to just 4, like Hyperliquid, which has only 4 validators and is now looking to expand its validator set to ensure decentralization?

1 Like

It’s worth noting that HyperLiquid is decentralising and adding a robust validator set as we speak.

Wouldn’t it be kind of embarrassing if DYDX had a smaller validator set than HyperLiquid (That would be a massive fumble)

Twitter would really run with this, there would be no way back from that kind of worsening of sentiment - especially when it was unnecessary…

Dear Autostake Team!
My point is that smaller chains tend to think about reducing their validator set, while larger chains like BNB Chain are continuously expanding their validator set—from 21 validators to the current 45. A validator set of 60, like dYdX’s, is actually small compared to other app-chains. dYdX is a blue-chip project, a leader in decentralized derivatives, so reducing the validator set from 60 to 30 simply because validators aren’t profitable is unreasonable. Companies are responsible for managing their own revenue, so why interfere in each company’s finances? There are many ways to reduce latency for dYdX, and we are ready to support the dYdX team in improving it, ensuring that we achieve latency improvements while keeping the validator set as decentralized as it currently is. Autostake team, are you ready to support the dYdX chain in reducing latency?

Dear Autostake Team!
If a smaller validator set is desired, why not choose 21 validators like EOS and BNB Chain did initially? However, large chains usually expand their validator set to ensure decentralization. We are young contributors who want to see dYdX grow even stronger in the future. Many young engineers, including ourselves, want to contribute even more to the dYdX chain, and we are ready to work with you to make this happen.

I think the best solution would be to change the validator configs as proposed by Autostake, convince all validators to move to Tokyo, and then observe the latency and block time. The social aspect of reducing the active set is very sensitive; dYdX hasn’t been able to build a user community for various reasons, and now the validator community is at risk.
By the way, no representatives from HFT companies have commented on the matter. I believe they have valuable data that could be discussed. @Callen_Wintermute @Jordi @Carlos_Raven @e4-d Maybe you can add something to this discussion?

1 Like

Hey everyone, Montagu from Citadel One, a DYDX validator.

We share many of the concerns voiced by community members regarding this proposal. I wanted to share a few inputs on some of the proposed changes, separately.

  • Revenue Distribution

    This part is suggesting a 50-40-10 revenue split between MegaVault, Stakers and Treasury subDAO, respectively:

    → Allocating a part of revenue to the Treasury subDAO is logical as it reduces relying on DYDX tokens to fund future development.

    → Allocating half of the protocol’s revenue to MegaVault seems excessive at this point. The feature isn’t battle-tested and seems to be unprofitable at the moment. There are, without a doubt, benefits to having deeper liquidity, but I find it hard to justify allocating 50% of the protocol’s revenue to it from the get-go. This should start at a much lower percentage and increase accordingly, not the other way around.

    → A quote from the report: ‘The dYdX ecosystem currently lacks a “buy and stake” mechanism’
    Unless I’m mistaken, that is inaccurate. The community staking deal with Stride does exactly that. That being said, I’m not opposed, but perhaps better ideas could be explored, such as deepening DYDX liquidity, given that this will be actively managed by the subDAO.

  • Halvening the Validator Set

    This part is suggesting the reduction of the Validator set from 60 to 30. The proposal’s motivation is the profitability of Validators. Based on my understanding, the rationale here is to make up for the losses incurred by Validators following a 60% decrease in revenue.

    Disclaimer: Citadel One would be directly affected by this change so take this with a grain of salt.

    There are many inconsistencies here that I’d like to point out:

    → I’d be curious to know more about the figures used to benchmark profitability and how they were sourced. I doubt they represent a majority of the Val Set.

    → This will only work if stake from decommissioned validators is distributed pro-rata amongst the Top 30, which is the best-case scenario and unlikely. What’s more likely is greater concentration of VP, which will further hurt the profitability of the remaining Validators, ending up with the same problem the proposer is trying to solve.

    → Validating is permissionless, meaning that validators are free to opt out whenever they want (for example, when they’re not profitable). Imo, there is no need for a third party, with inaccurate data, to make that decision for them.

    → That being said, there are valid reasons to adopt a smaller validator set and they should be considered in their own merit. This is why I think the community should start a separate discussion about increasing the chain’s performance as there are alternative solutions worth exploring (mentioned above) before halving the Set.
    Better performance and low latency should be a priority but it’s important to remember that we’ll always lag behind CEXs and ‘semi-decentralized’ DEXs as decentralization ( dYdX’s competitive advantage) comes with its trade-offs.

    TL;DR The proposer’s rationale is flawed and backed by inaccurate analysis.
    Barring the 10% of revenue routed to the Treasury subDAO and halting the bridge, all proposed changes here need to be discussed further, SEPARATELY. This isn’t to say that some of the changes couldn’t prove beneficial to the chain, but an approach of ‘if it didn’t work we’ll reassess’ isn’t the way to go here.
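As a numeric illustration of the 50-40-10 split discussed above, here is a minimal sketch. Only the percentages come from the proposal; the revenue figure and the "conservative" alternative split are hypothetical, chosen to show the "start lower and increase" idea:

```python
# Sketch of the proposed 50-40-10 revenue split (MegaVault / Stakers /
# Treasury subDAO). The monthly revenue figure is hypothetical; only the
# split percentages come from the proposal under discussion.

def split_revenue(revenue_usdc, megavault=0.50, stakers=0.40, treasury=0.10):
    """Return the USDC routed to each destination under a given split."""
    assert abs(megavault + stakers + treasury - 1.0) < 1e-9, "shares must sum to 100%"
    return {
        "megavault": revenue_usdc * megavault,
        "stakers": revenue_usdc * stakers,
        "treasury": revenue_usdc * treasury,
    }

# Hypothetical $1,000,000 month under the proposed split...
proposed = split_revenue(1_000_000)

# ...versus a more conservative starting allocation for MegaVault,
# as argued for above (e.g. 20% to start, scaled up if it proves out).
conservative = split_revenue(1_000_000, megavault=0.20, stakers=0.70, treasury=0.10)
```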

4 Likes

Dear RealVovochka !
This solution can be implemented immediately: all 60 validators can be located in Tokyo, and we can observe the latency and block time of the dYdX chain. I believe this solution can be done instantly, and we will then make a final decision on whether to reduce the validator set from 60 to 30 after this survey is conducted.

1 Like

Hey all, Mag from Skip here.

I understand that reducing the validator set would have harmful effects - namely validators would get cut out of the active set. I’m not here to comment on that (it makes me sad), only to comment on the technical side after years of working deep within the Comet codebase.

Reducing the number of validators would definitely speed up dYdX. There is no honest technical argument to believe otherwise. Centralized systems are the fastest, and as a network approaches a centralized system, it will speed up. The reality is that the gossip factor of Comet-based data (e.g. txs, votes, blockparts) is extremely high, so any additional data gets flooded around the network multiple times. With dYdX especially, which uses Skip:Connect (and therefore circulates heavy Vote Extension packets), there is a ton of data on the gossip network. The more validators there are, the more gossip there is, as each validator regossips the data it receives. This can throttle a network by preventing votes, proposals, and blockparts from reaching the selected proposer, who will effectively wait for the network to catch up.

Cutting down the number of validators will reduce the amount of traffic in the gossip network, will reduce the number of validators required to vote on committing blocks/proposals, and will concentrate stake into the remaining validators that are not cut. This will improve network latency in a couple ways:

  1. On average, there will be a lower minimum number of validators from which votes are required to reach the 2/3 threshold, and so blocks will progress faster
  2. If the validators are colocated, the latency between each peer will be much smaller, effectively increasing the gossip throughput of the network.
  3. There will be significantly less data in the gossip network overall.
  4. There is significantly decreased chance that the proposer selected is a non-functioning validator (the risk of which increases as there are more validators)
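To make point 1 concrete, here is a back-of-the-envelope sketch. It assumes, purely for illustration, equal stake per validator; real quorums depend on the actual voting-power distribution, not validator counts:

```python
# CometBFT commits a block once validators holding strictly more than
# 2/3 of total voting power have prevoted/precommitted. Assuming equal
# stake per validator (an illustration only), the minimum number of
# validators needed for quorum shrinks with the set size.

def min_validators_for_quorum(n):
    """Smallest number of equally-staked validators out of n whose
    combined voting power strictly exceeds 2/3 of the total."""
    return (2 * n) // 3 + 1

for n in (60, 30):
    print(f"{n} validators -> {min_validators_for_quorum(n)} needed for >2/3")
```

With 60 equally-staked validators, 41 must vote before a block can progress; with 30, only 21, so the proposer waits on fewer (and, if colocated, closer) peers.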

Now, there are many other ways to reduce the latency / block time of a network, that are currently being worked on by the Informal team (and the Skip team) - including mempool performance improvements, reducing the size of vote extensions, direct peering amongst validators, etc. These are not ready, but will be helpful to achieve the same thing. This does not change the fact that decreasing the number of validators and colocating them will definitively lower blocktimes and latency - it’s just a technical reality.

2 Likes

To clarify - I don’t like the fact that decreasing the number of validators would centralize the network further, and also damage the businesses of the many incredible hardware teams working on keeping dYdX secure.

I was asked to comment purely on the technical thinking around validator set size and block time / chain performance.

3 Likes

The main point was missed though. The maximum latency improvement would come from reducing the number of validators AND specifically removing the validators not based in Tokyo.

@nethermind suggested removing the bottom 30 validators not because of any latency improvements but based solely on incorrect data about profitability; they used some random $1k cost figure to justify the profitability analysis and the removal of validators. So you cannot now use Nethermind’s recommendation, based on inaccurate profitability data, to remove the bottom 30 validators as if it were also the best way to improve latency; it just makes no sense.

We first need to identify the validators not running from Tokyo and, if the goal really is to maximize the latency improvement, eliminate those validators. It is sad, yes, but they chose not to run from Tokyo despite the recommendations. Again, you want to eliminate validators to improve latency, fine, but which validators?
-Those running far from Tokyo, which empirically hurt latency and whose removal would yield the best possible latency improvement?
-Or the bottom 30 validators, because Nethermind suggested it, not for latency reasons but out of profitability concerns based on inaccurate and arbitrarily chosen cost estimates?

1 Like

Thanks for the analysis @nethermind

Directionally we agree with the proposed changes and think they will be beneficial to the dYdX Chain in the long term

We are effectively shifting protocol revenue towards more productive and positive actions. Building up the MegaVault should be highly productive for the protocol, however, I think its profitability should be closely monitored. There is no point in directing 50% of protocol revenue to MegaVault if it’s largely unprofitable

Lastly, we sympathise with a lot of the Validators that are affected by the reduction in the Validator set and think it would’ve been better to frame this reduction as being about increasing performance, as no one is forcing a Validator to join the network and it’s their decision to operate at a loss. If we truly want to improve performance (which I think should be the number 1 priority) then the 50% cut likely makes sense; however, if it’s about validator profitability it could then make sense to initially reduce it down to 45 and assess whether profitability improves

1 Like

If the geolocation of dYdX validators were very globally decentralized, as well as that of dYdX’s institutional users, you could then talk about randomly choosing 30 validators to eliminate, which could be the bottom 30. However, and this is key, the majority of institutional users of dYdX are near the Tokyo area, as are a good number of validators. In this situation, to improve performance you CAN’T just randomly pick 30 validators to eliminate, like the bottom 30; you must identify which validators are based far from Tokyo (around 30-40% of dYdX validators) and eliminate those selectively, if what we really want is to improve performance

The thing is, the author of the report @nethermind used an arbitrary assumed infra cost of $1k/month to argue that so many validators are running at a loss, but this assumption is totally wrong and shows a very poor understanding of validator costs. There are three main types of servers: bare metal, which you can rent at data centers, colocate in a data center, or run in your own data center; dedicated virtual servers, which are virtual but not shared, though still lower performance than bare metal; and virtual private servers, which are shared virtual servers with the worst performance. For dYdX specifically, since many validators are based in Tokyo, it is important to highlight that bandwidth in Japan is very expensive. Now, to the analysis.

Of course, if you go with AWS, GCP, and similar providers, you will pay insanely high costs for the simplest, lowest-spec servers. Also, given the block time of dYdX, let’s assume virtual private servers and dedicated virtual servers wouldn’t be the best for performance; the realistic options are as follows:

Without giving names, you can find VPS and VDS providers for monthly costs much lower than the suggested $1k; in fact, you can find very powerful VDS for less than $100. Moreover, you can find top bare metal servers in great data centers in Japan for around $100-300. If we add unlimited bandwidth in Japan, which some providers offer for around $300, you could have a top bare metal server with unlimited bandwidth in Japan for less than $500. A top dedicated virtual server with very high bandwidth and specs could still be found for around $100-200 in total, very far from the suggested $1,000. Of course, if some validators go to AWS, GCP, or the most expensive providers and pay $3,000-$5,000 for the same services, that’s their choice. But don’t imply that every dYdX Chain validator has $1k infra costs per month.

We use a top provider with a bare metal server and unlimited bandwidth in Japan, we are consistently in the top 10 dYdX validators by best uptime and lowest MEV, and we don’t pay anywhere close to $1k monthly for infra costs; it is more like half of that for the best possible performance. Other validators with lower specs or without unlimited bandwidth could easily have just around $200-300 in monthly costs. So the whole analysis of validator profitability, and the suggestion to remove 30 validators based on it, is just totally incorrect and inaccurate.

From the report: ‘The profitability calculation below assumes similar fixed infrastructure costs of $1,000/month and the staking APR net of emissions (inflation) has been considered. We acknowledge that infrastructure costs could be higher or lower, but use $1,000/month for simplicity.’
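To see how sensitive the report’s conclusion is to that cost assumption, here is a rough break-even sketch. Every figure in it (delegation size, APR, commission, token price) is hypothetical and chosen only to illustrate the sensitivity:

```python
# Rough break-even sketch: a validator's commission income is roughly
#   delegated_stake * (staking_apr / 12) * commission * token_price
# per month. All figures below are hypothetical; the point is only how
# strongly the profitability verdict depends on the assumed infra cost.

def monthly_profit(delegated_stake, staking_apr, commission, token_price, infra_cost):
    """Monthly USD profit for a validator, net of infrastructure costs."""
    gross = delegated_stake * (staking_apr / 12) * commission * token_price
    return gross - infra_cost

# Hypothetical small validator: 1.5M DYDX delegated, 15% staking APR,
# 5% commission, $1.00 token price (~$937.50/month gross commission).
# At the report's assumed $1,000/month cost it looks unprofitable;
# at a realistic ~$300/month it clears a profit.
at_report_cost = monthly_profit(1_500_000, 0.15, 0.05, 1.00, infra_cost=1_000)
at_realistic_cost = monthly_profit(1_500_000, 0.15, 0.05, 1.00, infra_cost=300)
```

The same validator flips from loss-making to profitable purely on the cost assumption, which is why the $1,000/month figure matters so much to the report’s conclusion.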

You use $1,000 ‘for simplicity’; great argument. You didn’t conduct any research or survey to estimate how accurate this $1k figure is. Yes, the costs could be much higher, $5k or more, and the whole dYdX active set would look unprofitable if everyone went to the most expensive providers possible. But companies don’t open factories in Switzerland, with the highest costs and labor costs in the world; they choose low-cost locations. Similarly, the majority of validators don’t go to the most expensive providers for the same services. Maybe Polychain or similar validators do, but the great majority don’t. This means most dYdX validators have costs much lower than $1k, not $1k or higher, and therefore your profitability analysis is totally wrong. If it hadn’t been discussed and corrected, your flawed analysis could have caused great damage to dYdX by eliminating many top contributing entities based on flawed assumptions

I think we can completely remove running costs from the arguments. No one is forcing anyone to operate a validator at a loss; it’s a business decision for each company. This is an open market, and using this as an excuse to reduce the number of validators is somewhat hypocritical. I would also say that even with the current VP distribution, dYdX has decentralization issues. Why is no one questioning the fact that the top 1 and top 2 validators are a single entity, which is easy to trace through blockchain transactions?

1 Like

There haven’t been any changes in revenue sharing yet, but some are already anticipating changes and have undelegated 10M tokens. We must be careful with these changes

How did this discussion get strongarmed from:

Reduce # of validators == token go up

to

dYdX is too slow (which wasn’t a pain point ever mentioned before) == Reduce # of validators.

Strong level of mental gymnastics going on here.

Thanks @nethermind for commenting on some of the feedback directly with quotes, can we also expect you to react to some of the alternate ideas for revenue and token incentives that would still give the same end result of boosting the megavault and making dYdX more sustainable?


Lastly, i want to make the point that it should also be evaluated whether the USDC yield can be converted to dYdX through a buy program and only then distributed to vaults and users. This impact on liquidity might be larger than increasing the simple dollar denominated yield. There are many DeFi projects that do distribution in this way and see a large percentage of their stakers re-lock these tokens, taking sell pressure from the market in aggregate.

best,
Ertemann
Lavender.Five

PS. We can take the loss of being removed from the set; if it happens, so be it. I am just very opposed to the way discussion has been handled on this and previous dYdX forum posts. Feedback is to be internalised; I will keep promoting that for a fair governance process.

3 Likes

Thank you for your reply.

To be clear, I agree with the goal of shifting revenue allocation to liquidity and improving chain performance.
It is the most important factor in attracting the platform’s key customers - retail traders, market makers and institutional investors.

On the other hand, I also believe it is important to get the message right.
Just as the validator profit and loss estimates complicated the discussion,
claims and rationales need to be carefully constructed.

The reason I mentioned that the direction should be clearer is that
I was concerned about the basis for some of your claims,
and that only the negative aspects might spread.

Nevertheless, I think it is good that various experts in the community are sharing their wisdom and exploring alternatives in this forum.
(It reminds me of the Commonwealth era…)

As I have nothing more useful to offer on the topic, I will now watch the discussion.
I hope it will bring the community back to life again and create good synergy.

P.S. As I am based in Japan, I would like to add that @Cosmic_Validator’s claims are true.
To give an example, a bare metal server that meets the dYdX Chain’s recommended specifications costs approximately 302 USD per month to run on “Sakura Internet”.

“Sakura Internet” is a large company, and listed on the Tokyo Stock Exchange.

It can therefore be said that operational costs can be eliminated from the discussion.

Disclaimer: I am delegating to PRO Delegators(@Govmos) so have no conflict of interest with Cosmic Validator.

1 Like

Hello dYdX Chain community!

I discovered that the Santorini Team undelegated a total of 9,507,021 DYDX just one day ago (transaction link) and has exited the dYdX chain validator set. In the past few days, the network’s locked DYDX decreased from 237 million to the current 227.85 million—a drop of around 10 million DYDX, primarily due to the Santorini team’s actions.

Hello dYdX Chain community!
We hope that the remaining 18 teams will relocate as well—it would be amazing! However, I understand that if all 60 validators move to Japan, any issue with Japan’s infrastructure could easily lead to a chain halt for dYdX, similar to what Solana experienced in the past.

I see that currently 42 out of 60 validators are located in Japan, with a total of 197,162,523 DYDX locked, approximately 87% of Validator Bonded Tokens. If a few more validators also relocate there, we could observe the results and hopefully see how the latency of the dYdX chain responds with more teams moving over.