Most of the discussion in this thread is centered around the MegaVault fee distribution and the validator set reduction. For some context, the DEP has worked on initiatives that have had direct insight into both of these topics.
- We have worked with dYdX market makers, funded all 119 existing vaults to support liquidity on long-tail markets, and are now funding Gauntlet to serve as MegaVault Operator (subject to governance approval).
- Our work with the MEV Committee, responsible for monitoring orderbook discrepancies, has also included research into non-MEV related networking issues on v4.
First, it’s important to keep in mind that a competitive trading venue is dependent on two components:
- Orderbook liquidity. Can users easily trade on any market with minimal slippage?
- Execution. Can users execute trades with low latency and fast, reliable confirmation?
Bootstrapping liquidity with the MegaVault
Our previous efforts working with market makers proved that incentives are required to attract liquidity to long-tail markets. Market makers are not interested in providing liquidity unless compensated; otherwise, the risk outweighs the reward. Vaults have been successful in providing the same service at near-zero cost (~-4% returns today, though it fluctuates). In other words, it's working, but it doesn't scale as a grants initiative – our capital is constrained and our primary goal is funding contributors to dYdX.
Simply put, dYdX needs access to more liquidity to compete with other venues, attract more volume, and scale permissionless markets. The MegaVault offers a solution to this problem, but will need a substantial amount of USDC deposits to work. Though we should expect slight differences in execution with MegaVault, we see from our existing vault deployments that returns hover near (or below) 0%. Attracting deposits will require additional incentives.
By redistributing trading fees, the MegaVault can offer an attractive yield capable of offsetting potential strategy losses. Put differently, the protocol is compensating depositors for their willingness to risk capital in return for liquidity, just as we did with market makers. As liquidity improves, we can expect more competitive markets, which in turn should attract more volume and increase trading fees. Ideally, this growth increases fees generated, returning more to stakers over time.
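To make the mechanics concrete, here is a back-of-the-envelope sketch of the depositor math. The fee share, TVL, and fee totals below are hypothetical illustrations, not proposed parameters; only the ~-4% strategy return reflects what we see from existing vaults.

```python
# Toy model of MegaVault depositor yield under fee redistribution.
# All parameter values are illustrative assumptions, not protocol parameters.

def net_depositor_apr(annual_fees_usdc: float,
                      fee_share_to_vault: float,
                      vault_tvl_usdc: float,
                      strategy_return: float) -> float:
    """Net APR = redistributed fee yield + (possibly negative) strategy return."""
    fee_yield = (annual_fees_usdc * fee_share_to_vault) / vault_tvl_usdc
    return fee_yield + strategy_return

# Example: $40M annual fees, 50% routed to the vault, $100M of deposits,
# and a -4% strategy return similar to existing vault deployments.
apr = net_depositor_apr(40_000_000, 0.5, 100_000_000, -0.04)
print(f"{apr:.1%}")  # → 16.0%: the fee yield more than offsets the strategy loss
```

The point of the sketch is the structure, not the numbers: as long as the redistributed fee yield exceeds the expected strategy drawdown, depositors earn a positive net return, and the protocol is effectively paying for liquidity out of the volume that liquidity attracts.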
Ultimately, we should think of liquidity as a protocol expense. We can either pay for it via community funding to market makers, or we can bake it into the protocol via trading fees. The latter is permissionless, adjustable, and much more sustainable.
Improving execution with networking performance
Through our work with the MEV Committee, we have repeatedly identified issues of inefficient order gossiping resulting in orderbook discrepancies. Orders submitted by traders don’t reach the relevant block proposer in time, resulting in worse pricing, slower execution, or both. This isn’t due to any malicious activity, but a result of networking problems.
It’s worth noting that practically every validator has, at some point, displayed orderbook discrepancies. Even under optimal configurations, discrepancies appear. This leads us to believe the network simply isn’t efficient enough to process trading demand at the expected performance level.
By halving the validator count, we effectively shrink the network an order must traverse to reach the next block proposer. We reduce peering requirements, allowing for more direct routes among validators and trading nodes. Theoretically, the reduction should allow orders to gossip through a leaner network more efficiently, reducing discrepancies.
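As rough intuition for why fewer nodes means faster gossip (this is a toy random-graph model, not a simulation of CometBFT's actual gossip protocol), in a topology where each node keeps a fixed number of peers, the expected number of hops between two nodes scales roughly with log(N)/log(d):

```python
import math

def expected_gossip_hops(num_nodes: int, peers_per_node: int) -> float:
    """Rough hop estimate for a random topology with a fixed peer degree.

    The set of nodes reachable within h hops grows roughly like d**h,
    so the typical path length scales with log(N) / log(d). This is a
    toy model, not dYdX's real network topology.
    """
    return math.log(num_nodes) / math.log(peers_per_node)

# Halving the validator set (60 -> 30) at the same peer count shortens
# the expected path an order travels before reaching the next proposer.
print(round(expected_gossip_hops(60, 10), 2))  # ~1.78 hops
print(round(expected_gossip_hops(30, 10), 2))  # ~1.48 hops
```

The absolute numbers don't matter; the direction does. Every saved hop removes one serialize/deserialize/forward round trip from an order's path, which is exactly the latency that shows up as orderbook discrepancy.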
As far as I know, the number of validators at launch was chosen arbitrarily. There is no research or justification for why 60 is an optimal number of active validators. Instead, we’re learning from experience that 60 is probably sub-optimal. Why should we stick with a number that isn’t working? Reducing it is a reasonable experiment to improve execution, making dYdX a more competitive trading venue. Thankfully, this is something we can continue to adjust or revert over time based on future learnings.
Obviously, as mentioned in the thread, we risk losing a number of high-quality validator teams contributing to the protocol. It’s not an easy decision to make, but dYdX has to perform better on execution if it wants to compete with other venues.