Decentralising the V4 dYdX Frontend

Hi all,

Immutablelawyer here with yet another boring technical-legal post about progressively decentralising the dYdX Protocol. This post is meant to initiate a much-needed discussion with regard to decentralising the front-end layer of the protocol. This is a layer which is regularly disregarded when discussing the progressive decentralisation of a particular tech-stack yet, from a regulatory point of view, having a decentralised front-end could make or break a protocol. Firstly, let’s get into the different layers that actually make up a decentralised system (or that would make up a potentially decentralised one).

The image posted above is a clear example of what currently makes up the majority of tech-stack architecture within our industry (specifically with respect to on-chain finance i.e. DeFi, but also applicable to other sub-sets such as storage protocols, oracle protocols etc.). As you can see, there are different elements present here, with these different elements contributing (in a joint manner) to the overall decentralisation metric of a particular protocol. What is quite interesting about V4 is that the second layer within the tech-stack (the composable smart contract protocol) will be eliminated and thus, the dYdX V4 (high-level) tech-stack architecture will look more like this:

For the purposes of this discussion, the main focus shall be on the Client (i.e. Frontend). As previously stated, to reach a level of full decentralisation the different components here have to be decentralised (naturally eliminating users – however, with regard to the Token and User Data, decentralisation here takes the form of ensuring economic decentralisation re. the token and ensuring technical and legal decentralisation re. collection, storage and safekeeping of user data – I potentially plan to post on all these different facets, but let’s get back to the topic at hand – the Frontend).

Why decentralise the Frontend?

The first question one might ask is why, after all, do we want a decentralised frontend layer within the dYdX Ecosystem? The two main points (among numerous others), to be addressed here are censorship and the concept of a ‘Single-Point-of-Failure’.

Censorship and Single-Points-of-Failure

Censorship is an important phenomenon to tackle when wanting to move towards full or, what I more adequately call, an optimal level of decentralisation. To explain this, I shall provide a very recent example. Uniswap is one of the largest DEXs within our industry. Its structure is made up of the Uniswap Foundation, Uniswap Labs and the Protocol (the tech-stack). Now, Uniswap Labs depicts itself as a mere frontend provider for the Protocol (we know that they are also the main active software developers of the Protocol – but that’s a separate argument altogether); however, they developed both the Frontend and the Protocol itself (they have since decentralised many functions).

Uniswap had three frontend-related events in 2022. The first related to the Privacy Policy (and this relates to the User Data component of the above diagram). The updated Privacy Policy stated that the DEX collects certain on-chain and even off-chain data connected to users’ wallets. As pointed out by The Block (The Block: Uniswap's new privacy policy says it collects data tied to user wallets):

“It (Uniswap) clarified that publicly-available on-chain data is analyzed to help make informed decisions. As far as off-chain data goes, Uniswap claimed it does not gather sensitive personal data like names, emails or IP addresses. However, other off-chain web identifiers related to users’ activity on front end website are still scraped, the exchange noted.”

Our main concern here relates to the off-chain data i.e. data collected from the frontend. The Uniswap frontend being hosted solely by Uniswap Labs led to a high degree of centralisation. Due to the simultaneous on-chain and off-chain data amassed by Uniswap Labs, it could theoretically link an IP address to a wallet address and, in turn, to that wallet’s transactions. Now, some might say, “But IL, their frontend code is open-source, they’re doing all they can!” -> I’ll get to this in a bit.

With regard to on-chain data, we cannot make any arguments; on-chain data is transparent and immutable in nature – this is the beauty of the blockchain. What we can do is provide mechanisms which ensure that off-chain data (User Data) is not merely funnelled through one front-end which, if compromised, could lead to a data leak like no other – IP addresses being linked to large wallets, and god knows what follows.

The second frontend-related matter Uniswap experienced was the main frontend (which funnels almost all of the Protocol’s volume) going down in November 2022. Again, here we have a clear example of a Single-Point-of-Failure. The front-end went down, the token crashed until it was back up, volume dropped, and many people publicly stated that they’d move to other DEXs. This is the result of having a Single-Point-of-Failure. Had Uniswap promoted and incentivised multiple front-ends, the aforementioned negative effects would not have been an issue, or would have been greatly diminished.

The last, and most important, frontend incident dates back to August 2022, when Uniswap Labs blocked 253 addresses from accessing its Frontend. The 253 blocked crypto addresses could continue to use Uniswap’s smart contracts, yet they were blocked from accessing the popular Uniswap website, a frontend managed and maintained by Uniswap Labs, a New York-based company.

These address blocks were the result of US Sanctions (kudos to them, as these were some bad people for the most part – sanctions relating to heinous crimes were linked to most wallets - uniswap-trm.csv · GitHub). Here’s the issue: as pointed out by @banteg, circa 30 of the 253 blocked addresses (12%) merely had ENS names associated with address holders on the US Sanctions List and were thus collateral damage. As pointed out by ZachXBT, some of these addresses were NomadWhiteHats.

What does the above mean?

The fact that Uniswap Labs has the capacity to censor sanctioned users also means that Uniswap Labs has the capacity to censor (remove frontend access from) any user based on an arbitrary choice (case in point re. those caught in the crossfire above). Hence, where a frontend is centralised, we have a situation where one may abuse this centralisation or rather, misuse it. Furthermore, this also gives rise to a situation where Uniswap Labs might be ordered to remove frontend access from Person X for an act performed in the United States – but maybe that act is not a crime within the European Union – thus, that person would be able to use an EU-hosted frontend (this is just a high-level example – other examples could relate to the US party-in-power wanting to infringe on an opposition party’s financial liberties and ordering Uniswap Labs to comply with blocking such a person from accessing the Frontend).

Potential Solutions


(P.S. I used Uniswap as an example due to the substantial similarity of its corporate structure to that of dYdX – specifically in relation to having the private company behind the Protocol also running the main frontend in the case of dYdX).

Following that example-giving section of the post, we’ll now move on to a potential solution that could be used to avoid any of the three scenarios experienced by Uniswap – I am a big believer that those who do not learn from history are doomed to repeat it – so let’s not repeat it!

I will now outline a cool concept that could potentially be developed on Cosmos, as well as a potential solution that could be leveraged to progressively decentralise the frontend layer of the dYdX Ecosystem. The first, Dappnet, is not currently live and is not available on Cosmos due to its reliance on ENS Domains.
Dappnet is a permissionless application network built on IPFS and ENS Domains. Dappnet addresses the issue of the current predominant access to dApps via DNS and servers (which are centralised in nature), and leverages ENS and IPFS hosting to create a ‘BitTorrent-like P2P Network’ (in fact, Liam is currently discussing moving from IPFS to BitTorrent). More details on Dappnet can be found in Liam’s tweet and in the GitHub repo here (Issues · gliss-co/dappnet-features · GitHub) – this is currently a work in progress. I’ve been looking forward to this for a while. However, unfortunately, this does not solve the issue for dYdX as it relies on ENS Domains. So let’s move on to our next solution.

The Liquity-Model

Previously, I mentioned the following: “Now, some might say, ‘But IL, their frontend code is open-source, they’re doing all they can!’ -> I’ll get to this in a bit.” – let’s get into this. Open-sourcing the frontend code is not enough to decentralise your frontend. Merely open-sourcing the codebase of the frontend is the equivalent of stating that people should use electric cars without adequately incentivising electric car usage (in my home country this is done through subsidies, government grants etc. – an interesting parallel to draw).

Liquity did just this. Liquity itself does not run its own frontend at all – it actually instructs users intending to use Liquity’s features to choose from a list of Frontend Operators (Liquity | Frontends List - Use Liquity). Hence, to open loans, make deposits etc., users thus have to use one of the frontends provided by third parties (an explanation via video material can be found here: Liquity Runs on Decentralized Frontends - YouTube).

Frontend Operators provide a web interface to the end-user enabling them to interact with the Liquity protocol. For that service, they will be rewarded with a share of the LQTY tokens their users generate.
LQTY rewards are awarded to Stability Pool depositors and then proportionally shared between the users themselves and the Frontend Operator. How much each party gets is determined by the Kickback Rate which is set by the Frontend Operator and can range between 0% and 100%. Each Stability Pool deposit is tagged by the Ethereum address of the front end through which the deposit was made. This address is where the frontend’s LQTY rewards accrue.

Setting a high Kickback Rate will make the Frontend Operator attractive to users, but offering a nice interface and additional functionalities might allow for a lower kickback rate while still garnering user interest.

To incentivise users to act as Frontend Operators, Liquity allows Operators to set a rate between 0% and 100% that determines the fraction of LQTY that will be paid out as a kickback to the Stability Providers using the frontend. If a frontend sets the kickback rate to 60%, its users receive 60% of their earned rewards while the frontend keeps the remaining 40%.
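To make those mechanics concrete, the kickback split can be sketched in a few lines of Python (an illustrative sketch; the function is my own, not Liquity’s actual code):

```python
def split_rewards(total_lqty: float, kickback_rate: float) -> tuple:
    """Split a Stability Pool LQTY reward between a depositor and the
    Frontend Operator whose frontend tagged the deposit.

    kickback_rate is the fraction (0.0 to 1.0) that the operator
    passes on to the user; the operator keeps the remainder.
    """
    if not 0.0 <= kickback_rate <= 1.0:
        raise ValueError("kickback rate must be between 0% and 100%")
    user_share = total_lqty * kickback_rate
    operator_share = total_lqty - user_share
    return user_share, operator_share

# A 60% kickback on 100 LQTY: the user earns 60, the operator keeps 40.
user, operator = split_rewards(100.0, 0.60)
```

A higher kickback rate makes a frontend more attractive to depositors, at the cost of the operator’s own share.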

This is a simple, yet effective way of making sure that users are adequately incentivised to run frontends. In the case of Uniswap, the lack of incentivisation re. the operation of alternative frontends has naturally led to a stagnation in alt-frontends, and thus a centralisation and Single-Point-of-Failure in the Uniswap Labs operated frontend.

I firmly believe that this model should be put up as a Request for Proposal on the dYdX Grants Page to commission research into the possibility of adopting this model on Cosmos – or alternatively, that a working group be set up to tackle this optimisation. This will naturally raise other issues that need to be taken into account – namely, ensuring that the economic incentives distributed to frontend operators are distributed in an equitable manner so as not to hinder the Chain’s economic decentralisation. There are a myriad of ways to achieve this, and one need not incentivise frontend operators solely via dYdX Tokens. I, for one, would rather be in favour of incentivising Frontend Operators via USDC, with the Frontend Operator getting a USDC kickback based on the volume traded by users through that particular frontend.

This model would also bring in another set of Ecosystem Participants (in addition to market makers and node operators), as users who cannot afford to meet the hardware requirements to set up a node due to the high barrier to entry would still be able to participate within the Ecosystem and contribute to its progressive decentralisation by operating a frontend (which naturally entails a substantially lower barrier to entry from a cost perspective).
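As a rough sketch of what that USDC model could look like (the fee rate and operator share below are hypothetical placeholders, not actual dYdX parameters):

```python
def frontend_usdc_rebate(volume_usdc: float,
                         fee_rate: float = 0.0005,
                         operator_share: float = 0.25) -> float:
    """Compute the USDC kickback owed to a frontend operator as a share
    of the trading fees generated by the volume routed through that
    frontend. Both rates are illustrative assumptions only."""
    fees_generated = volume_usdc * fee_rate
    return fees_generated * operator_share

# $10m of volume at a 5 bps fee with a 25% operator share
rebate = frontend_usdc_rebate(10_000_000)
```

Because the rebate scales with routed volume rather than token emissions, it remains a sustainable incentive even after any token-reward programme ends.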

Disclaimer: The operation of a Frontend could potentially trigger regulatory implications based on where the frontend is operated from – the operation of a frontend within the EU could be deemed to constitute the unregulated/unlicensed solicitation of users to use dYdX (which would breach current provisions in relation to financial services and crypto-asset services). Hence, the research conducted should also include an assessment of the regulatory implications that could potentially be incurred by virtue of operating a frontend, so as to make sure that frontend operators are aware of the risks and pitfalls associated therewith.

Thank you for taking the time to read through this post – I hope this serves as a catalyst to initiate discussions with regard to decentralising the frontend layer on dYdX v4.

Any feedback or discussion is always appreciated!


Hi @Immutablelawyer,

We are incredibly pleased to read your comprehensive forum post on the subject of progressively decentralising the dYdX Protocol’s frontend layer. Your information is valuable and provides an excellent starting point for further discussions.

We couldn’t agree more that a decentralised front end makes a significant difference in a protocol regarding censorship and eliminating single points of failure. Your examples, particularly those from Uniswap, are insightful and give a good understanding of potential issues arising from centralised frontends.

The possibilities you mentioned regarding creating unique frontends and exploring how hosting can be managed are intriguing. Looking into solutions such as Dappnet and the Liquity Model can offer valuable insights into tackling frontend decentralisation.

As you suggested, a research paper on this topic is an excellent idea, especially considering the potential regulatory implications and the need to ensure economic decentralisation. We appreciate your thorough analysis and would be eager to see more discussions and possible solutions in the future.

Thank you again for initiating this meaningful conversation, and we look forward to participating in further discussions and developments in this area.

Fox Labs


Hey @Immutablelawyer, Josh from the dYdX Foundation here. Thanks for this. Great post!

A couple of questions that came to mind while reviewing -

  • I think a consistent user experience is important. How do we ensure that front-end operators are all running the most recent version of code? Do we think users will avoid front-end operators with out-of-date code?
  • Is there a critical mass or target amount of FE operators to achieve sufficient decentralization - how many FE’s and how much flow going through each?
  • Any potential conflicts of interest with FE operators participating elsewhere in the ecosystem? If so, what mitigation, if any?



Excellent post as usual, @Immutablelawyer.
I believe it would be a great idea to merge a decentralized frontend with a referral program that we can create. This way, traffic providers will be motivated to use the latest version of the frontend in order to receive the highest rewards for the users they bring in.


Hi @Josh_E_Wa,

How about considering a combination of a consistent design that anyone can host, along with the possibility of unique front ends as well?

This approach could foster innovation and draw more users to the dYdX platform. We can achieve a balance between consistency and uniqueness by providing a main front end that anyone can host, adhering to a set of guidelines that ensure a fundamental level of functionality and user experience.

Simultaneously, we can empower users to develop their own distinctive front ends, incorporating additional features and benefits. This strategy would promote healthy competition among front-end operators and incentivise them to enhance their offerings to attract users. To preserve overall consistency, these custom front ends should comply with a set of core principles and standards.

Regarding your concern about keeping the standard front ends up to date, we can introduce a version-checking mechanism that notifies users if they’re accessing an outdated front end. This would motivate front-end operators to update their code in a timely manner, as users will likely steer clear of outdated versions due to potential security risks or missing features.
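A minimal sketch of such a version check, assuming dotted release versions (in practice, the latest version would be fetched from a trusted source such as an on-chain registry or a signed manifest, rather than from the frontend itself):

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_outdated(frontend_version: str, latest_version: str) -> bool:
    """True if the frontend a user is accessing lags the published release."""
    return parse_version(frontend_version) < parse_version(latest_version)

# A frontend on 1.4.2 would trigger a warning once 1.5.0 is published.
outdated = is_outdated("1.4.2", "1.5.0")
```

The warning itself could be rendered by the wallet or browser extension, so the outdated frontend cannot suppress it.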

Best regards,



Hey @foxlabs !

Firstly, thanks for being an active Forum participant and for contributing value to discussions - it is much appreciated from my end.

There are also other points which are worth adding to the post which I might add in due time (potentially tomorrow). One point of interest is that it just so happens that the community seems to have a lot of reward-oriented proposals of points of discussions (some initiated by you, others by @Callen_Wintermute , and even this one initiated by myself).

Maybe it would be a good idea (in the spirit of achieving an effective reward-model), to collaborate on one proposal implementing all these facets in the near future so as to avoid fragmented proposals which lead to overlapping changes on the same subject matter.

Just a thought from my end - thanks a lot for the comment!


Hey @Josh_E_Wa !

Pleasure to be conversing with you yet again - It’s a positive sign to see faces from the Foundation contributing here! I shall address your questions below:

Re. Point 1 - this is most effectively achieved by updating the corresponding GitHub repos that contain the code for the respective front-end - with updates to the underlying SDK/Frontend Kit being communicated through GitHub + other community channels. In Liquity, Frontend Operators are given two options - launch a frontend using the Frontend launch kit, or alternatively, use the Liquity Frontend SDK and the middleware library they provide to create your own front-end application. Hence, you could either opt to provide for one type of frontend, or allow for iterations of that frontend, as even pointed out by @foxlabs (as you can see, there are multiple iterations/integrations here).

At all stages however, (naturally) the function of the Frontend is still as per the above image ^

Re. Point 2 - There isn’t an agreed-upon lower or upper threshold to assess frontend decentralisation/centralisation (these thresholds are not even present re. PoS Blockchains vis a vis nodes). We have to consider that frontend decentralisation is actually a novel concept and thus, we would actually be setting the standard (together with other Protocols like Liquity), in regard to FE decentralisation. With regard to the flow going through each, we would have to come up with a system that takes into account a balanced-incentive model wherein users are incentivised to use front-ends with a lower usage rate with the possibility of achieving a form of incentive by using such a less-used FE. This is the same concept that is prevalent in certain LPs/Staking Modules - wherein popular LPs/Staking Modules have lower APYs and lesser-used LPs/Staking Modules have a higher APY intended to trigger further deposits (conducting research in this regard would be very interesting). Finally, I would opine that I deem node count and FE count to be equal in importance. However, one phrase that rings ever so true in mechanics such as this is that the more, the merrier!
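That balanced-incentive idea could be sketched as weighting rewards inversely to each frontend’s share of flow (purely illustrative; the exact weighting function would be a research question in itself):

```python
def balanced_weights(volumes: dict) -> dict:
    """Assign each frontend an incentive weight inversely proportional to
    its share of total volume, so lesser-used frontends earn a higher
    reward rate - analogous to lower-TVL pools paying a higher APY."""
    total = sum(volumes.values())
    inverse = {fe: total / v for fe, v in volumes.items() if v > 0}
    norm = sum(inverse.values())
    return {fe: w / norm for fe, w in inverse.items()}

# A frontend with 10% of the flow receives the larger incentive weight.
weights = balanced_weights({"fe-a": 900.0, "fe-b": 100.0})
```

Any real design would need to damp this (e.g. with caps or floors) so that operators cannot farm rewards by splitting flow across many empty frontends.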

Re. Point 3 - I personally do not see any conflict of interest with a node operator, for example, also being a frontend operator. Actually, I would opine that at the initial stages, I would encourage each node to set up and host its own separate FE. The only concern I have with regard to FE hosting is with the Foundation/Trading operating a front-end (the concern is mainly based on regulations + decentralising certain key and pivotal roles away from these entities). I would suggest that the Liquity approach is taken and that dYdX Trading Inc., for example, actually winds down the FE it hosts and decentralises this key function to the community (I created a high-level post on decentralisation and its pre-requisites here: IL Rants #1: Understanding Decentralisation).

I hope I addressed your points as effectively as possible. If I missed anything or something is unclear please do reach out!


The other rewards systems discussed are based on dYdX rewards, which will eventually come to an end. In contrast, incentives for frontend operators should ideally be derived from a percentage of fees in the form of USDC. This approach ensures that frontend operators have a sustained motivation to continue providing their services even after dYdX rewards have ceased.


Very valid point @foxlabs - glad that we agree re. using USDC as rewards. I think it would be a better incentive than dYdX due to its stable nature as well (some might not want to earn dYdX as a reward due to price fluctuations).

Thanks for contributing sir!


Great post!

Here are a few questions:

  1. If a frontend operator hosts an unblocked (either open to US or doesn’t block sanctioned persons or regions) version of the frontend, won’t this increase potential liability for the dYdX DAO and possibly dYdX Trading and Foundation?
    a) This topic is admittedly complicated because the frontend could be opened up in a foreign country (e.g., Russia) that is outside of the scope of the CFTC, and as a result, there would be unfettered US retail access through that frontend to derivative transactions. In this case, the only entity violating any laws would be the Russian entity, but it can’t be stopped, so the CFTC might try to go after the DAO or dYdX Trading for making the protocol available.

  2. How do we keep frontend operators from rugging users?


Hey everyone, Nico from the team here - we have shared some ideas in the past on how to decentralize frontends with dYdX through the concept of NFAs (see fleekxyz/non-fungible-apps on GitHub).

We’ve evolved the idea a bit and wanted to share a refresh that could help inform this process, or give an option that might adapt to the explorations you are running right now.

NFAs are a web3 app frontend’s code on IPFS, referenced in the source of an NFT, that the user can mint a copy of, loading the app directly from their wallet:

We’ve taken a step back to redefine how accessing an app could become more decentralized through the concept of NFAs. A good parallelism to make would be the App Store.

Developer uploads a package to the App Store > The user installs a copy of the package.

So, what if we simply use NFTs as what would be the app packages, and have users “install” or get access by minting their own personal copy? Today, NFTs for image assets simply reference the source of the image file stored on IPFS, and any wallet can load that either directly via IPFS or using a gateway pointing to the specific asset on IPFS. Instead of building a new standard for NFAs, we can use regular NFT standards, and use the IPFS source field to point to the app’s frontend code hosted on IPFS (today NFTs do the same thing, but the IPFS hash is simply for an image, vid, etc.).

  1. An App mints their app as an NFA, putting the FE code on IPFS, attaching that as the NFT’s source.
  2. A user mints a copy of that NFA and it becomes visible in their wallet, today as it’s a normal NFT.
  3. User clicks or opens the NFT, loading the HTML frontend of the app via IPFS on a new tab.

In concept, it’s like installing an app on your phone from the App Store, with wallets becoming a “tiny OS” for apps where you mint and add the apps you want to use to your collection. In a nutshell, each user would have their “own personal access” to any app they mint a copy of, unique to them. Since the content of the NFT is HTML code on IPFS, when opened, the browser will simply render the app’s frontend, fully functional. No need to run domains, or even a main unique FE at all in any way.
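The wallet-side resolution step can be sketched as a simple URI rewrite (an illustrative helper, not Fleek’s actual code; the CID and gateway below are placeholders):

```python
def ipfs_to_gateway(uri: str, gateway: str = "https://ipfs.io/ipfs/") -> str:
    """Resolve an ipfs:// URI, as stored in an NFA's metadata, to an HTTP
    gateway URL that a browser tab can open. A wallet or browser with
    native IPFS support could skip the gateway and fetch directly."""
    prefix = "ipfs://"
    if not uri.startswith(prefix):
        raise ValueError("not an ipfs:// URI")
    return gateway + uri[len(prefix):]

# Placeholder CID; a real NFA would reference the app's actual build.
url = ipfs_to_gateway("ipfs://QmExampleHash/index.html")
```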

This gives users several options to resolve/access apps directly:

a) Today, they could load the app from the NFTs source on the wallet, which would open the app via an IPFS gateway much like Metamask opens a new tab to show you the image of the NFT on IPFS (see attached video).

b) Alternatively, either via browsers (e.g. Brave) or wallets with direct IPFS integrations, if a wallet detects the user holds an NFA, it could load it directly from IPFS via a direct call.

c) Other options that could be explored maybe are saving the code locally on the browser/wallet and loading it directly from local-storage; or trying alternate storage layers instead of IPFS…

Here’s a quick demo:

The above is a very quick n’ dirty showcase; instead of adding the NFT/NFA manually to the wallet, it would automatically be detected and shown, given most wallets use OpenSea’s API (we could have an “add to wallet” button regardless). If wallets recognize NFAs as a different type of NFT, they could also have the wallet open a new window directly when the user clicks the NFT icon in the gallery, instead of having to search for the link in the metadata.

From the app and user perspective, let’s say Uniswap (following the vid example above) has their app available as an NFA. Instead of having an “Enter App” button on their homepage to enter app.uniswap…, the site could have a “Mint App” button, which would trigger the mint of a copy of the NFA NFT into the user’s wallet, giving them the NFT, and hence their own access to the app via the wallet.

However, the Uniswap homepage itself could detect when the user already owns/holds the NFA NFT, and instead of having the user open their browser wallet to open the app, the homepage’s button could switch from “Mint App” to “Enter App” automatically, opening a new tab with the user’s personal NFT access to Uniswap when clicked.

This is mostly the general idea of the more seamless/integrated flow we want to achieve. Closer integrations of NFAs into wallets, which should just be minor tweaks, would drive the experience home. This summer we’ll start building those bridges and preparing some showcases to aim for day 1 support across wallets.

There are a couple paths to explore on how to improve this base concept, how apps are updated, what option is best to ensure safe and trusted rendering/resolving, but figured we should share to fuel ideas as we are working towards similar goals. Happy to do a demo for the dYdX community if interested.


That sounds like a great idea, indeed, @nico-fleek. It’s remarkable how you’re leveraging NFTs and IPFS to revolutionise how we access and use decentralised apps. It’s a very innovative way to empower users and increase decentralisation.

However, this might be a bit daunting for those new to DeFi and not as technically minded. While the system you’ve described is impressive and has clear benefits, it also introduces several new concepts and steps that could overwhelm newcomers. This is especially the case when minting their copy of an app, a process that might not be familiar to many.

There are a few ways we could address this concern. One is through comprehensive and user-friendly documentation and tutorials that explain the process in a clear and accessible way. Another is through intuitive UI/UX design that guides users through the process with clear instructions and prompts.

Furthermore, a gradual rollout of a system like this could be beneficial. This would allow users to gradually become familiar with the new process rather than being forced to adapt all at once.

Overall, this is an incredibly promising idea that could push the boundaries of what’s possible with DeFi. The potential challenges are substantial but surmountable with careful planning and user-centred design.


Much appreciated! That is on point; UX is the main worry to consider when thinking about how we can disrupt the traditional way of accessing apps in favor of decentralization.

Based on the flow we are envisioning, we could handle it in a way that replicates the current experience as much as possible, with the difference of the user having to mint the app before entering it (e.g. detecting the NFA and rerouting/loading it from the wallet’s NFT metadata in a new tab when they hit “enter app”, etc.). NFAs don’t necessarily need to be an exclusive approach to decentralizing front ends; they can be value-additive to any other approach taken to avoid those concerns, including integrating an ENS/DNS on top of the NFA itself to maintain a main public access point for the dapp.

On mobile it could get pretty interesting and improve the current wallet<>browser or wallet<>in-app browser flow. Instead of relying on those options, wallets could have an app-browsing tab showcasing all app NFAs available, mint/add it to the user’s wallet, have their own app collection much like an App “home-screen”, and load them in a frame in-app. Using on-chain data we could expand the experience to showcase trending apps based on mints, token-gate experiences, and provide incentives/rewards to holders.

Some future-thinking, but a good thinking exercise overall for what the concept of a tokenized app could enable :slight_smile: - one of the main perks being on the security and phishing avoidance field ofc, potentially giving users control over when/how their dapp frontend updates sounds like an interesting idea given recent issues with things like Ledger’s firmware update.

These are mostly flow ideas based on current dapp user experience though, I’m sure there can be a lot of cool ways or explorations on how to implement this, and it could be attached or flexible to each provider’s front-end setup. If we use current primitives, the setup overall is pretty much wallet and chain agnostic too. In any case, it’s great to discuss and pull the thread to understand what changes or considerations should be taken beforehand. In any context, the education and experience surrounding any of the possible approaches has to be superb to make sure it brings value without complexity.


Hey @Immutablelawyer - Thank you for writing this excellent problem statement.

I really like the Liquity model, but if you want to manage a dYdX DAO authoritative frontend you might consider the following proposal.

Disclaimer: My name is Dominic and I’m working as a Solutions Architect for DFINITY, the foundation behind the Internet Computer.

The Internet Computer is a decentralized, tamper-proof computing platform that employs what we call chain-key technology. In short, this means the Internet Computer has a persistent BLS public key, and a client can verify messages from the Internet Computer by checking 1 (or 2) BLS signatures. The ultimate light client. This light client, which we call response verification, can be integrated into a service worker, a local proxy, or eventually directly into browsers.

This makes it possible to build full-stack dapps on the Internet Computer and to securely host frontends on it. Our own website is an example: when you access the site, you’ll see that a service worker is first installed, which verifies the authenticity of the assets fetched from canister smart contracts on the Internet Computer.

What’s interesting, in addition to just hosting the assets, is the process of updating. Uploading new assets can be controlled by a canister smart contract, and this canister smart contract can even use HTTPS outcalls to fetch a governance voting result from dYdX to approve and deploy a new version of the frontend. For even more security, you could also implement IBC verification on the Internet Computer.

Happy to answer any questions you might have.


I see I am late to this thread but I think this is a great point on potentially malicious front ends. In fact, Liquity includes a disclaimer on its website that “Liquity AG does not make any statement regarding technical functionality and/or the trustworthiness of the Frontend Operators listed below”.

Front ends might be malicious by tricking users into signing undesirable transactions, mining unnecessary personal data, etc… Harkening back to @Josh_E_Wa’s question, we could also imagine market makers (or other programmatic users on dYdX) spinning up their own front ends and trading through them to earn rewards, essentially manufacturing a fee rebate at no benefit to the protocol. This would be especially the case if front end operators earned a portion of the fees as rewards without going through a vetting process.

While I am very interested in the idea of incentivizing decentralized front ends in a competitive market (thanks for elucidating it so clearly @Immutablelawyer!), I believe there are a number of design decisions that would make or break this program. As mentioned, one of them might be whether governance should/could vet front ends to indicate to the community that they are not malicious. Perhaps this could be a necessary process for the front end to earn any rewards (whether USDC or DYDX rewards) for hosting the service. This might be streamlined via a subDAO with a specific SLA to vet front ends, indexers, or any other decentralized components of v4.

Granted, including governance in this process would create additional friction, and we may consider alternative solutions. But it seems important to consider how to (a) protect users from malicious front ends, and (b) avoid remunerating teams deploying malicious front ends.