Defending Against LST Monopolies

Co-authored by Callum Waters, NashQueue, Jacob Arluck, Barry Plunkett, and Sam Hart

Any dumb ideas are my (Evan’s) fault; any good ideas in this post are either directly stolen from, or would have been impossible to formulate without, the immense amount of discussion with my colleagues at Celestia Labs, Osmosis, Iqlusion, Skip, and the rest of crypto twitter. This post kneels on the shoulders of PoS giants who don’t even need their last names mentioned, such as Sunny, Dev, Zaki, Bucky, Jae, Sreeram, Vitalik, and countless others. S/o to Rootul Patel for editing!

Liquid Staking Tokens (LSTs) set in place incentives to establish an external monopoly over the validator set. Such a monopoly threatens to exploit the native protocol and its participants for an undue portion of profits while making attacks significantly cheaper.

By combining two existing mechanisms in the same protocol, any third party can fork and instantly convert an existing LST, redirecting any value capture. This ability dramatically lowers the switching costs, encouraging competition and providing recourse to users in the event that an LST is capturing too much value. Essentially, native protocols can defend against LST monopolies by lowering the cost to vampire attack the LST protocol.

The two mechanisms are:

  • In-protocol Curated Delegations
  • Dynamic Unbonding Period

The first mechanism significantly lowers the barrier for third parties to create their own LST by adding in-protocol tools. The second changes the unbonding period to be dynamic and determined by the net rate of change of the validator set. This allows for the instant conversion between identical LSTs while also increasing the cost of existing attack vectors.

In-protocol Curated Delegations

One mechanism to create LSTs is described here; however, any mechanism will work. Feel free to skip to the next section if you already have a mental model for them.

As summarized elegantly by Barry Plunkett:

LST protocols fundamentally do two jobs:

  1. Validator set curation
  2. Token issuance

Curated subsets of validators are useful because delegators can easily hedge slashing risk and staking profits, while also selecting for broader more subjective preferences such as decentralization, quality of setup, or to fund specific causes.

Delegations can be made partially fungible by swapping the native token for validator-specific tokens. This is identical to normal delegation, except that the delegation can be transferred to a different account just like the native token. Each validator token is fungible with itself because it represents identical slashing risk.

For a user to mint a new LST, the protocol uses a predefined validator subset to mint validator tokens. Then those tokens are wrapped together to form the LST. An LST is just a “basket” of validator tokens.

To create a new LST that has five validators, all with equal ratios of voting power, a third party would pass the validators and their respective ratios in a tx. This would result in a simple basket token that looks like this:

[Figure: basic_basket — a basket token composed of five validator tokens in equal ratios]
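
As a rough illustration, in the style of the pseudocode later in this post, a basket might be represented as follows. All type and function names here are hypothetical.

// ValidatorToken identifies the transferable delegation token of a single
// validator. Both types here are hypothetical, for illustration only.
type ValidatorToken string

// Basket is a minimal sketch of an LST: each validator token mapped to the
// fixed ratio it contributes to one basket token.
type Basket struct {
    Ratios map[ValidatorToken]float64
}

// NewEqualBasket mirrors the five-validator example above: every validator
// token gets an equal ratio of the basket.
func NewEqualBasket(tokens []ValidatorToken) Basket {
    ratios := make(map[ValidatorToken]float64, len(tokens))
    for _, t := range tokens {
        ratios[t] = 1.0 / float64(len(tokens))
    }
    return Basket{Ratios: ratios}
}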

If a validator gets slashed, then fewer native tokens collateralize their validator tokens, and those validator tokens are therefore worth less. However, the ratio of validator tokens in the basket stays the same. For example, assume there is a basket with equal ratios of two validators, and one of the validators gets socially slashed entirely. The basket token is still worth the same number of validator tokens, but the validator tokens of the slashed validator now have no native tokens behind them, so the basket token is worth only half of what it was before the slash.
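
To make the halving arithmetic concrete, here is a hedged sketch using the hypothetical Basket above: a basket’s value in native tokens is the ratio-weighted sum of how collateralized each validator token is.

// basketValue returns a basket’s value in native tokens per basket token.
// collateral maps each validator token to the fraction of native tokens
// still backing it (1.0 = fully backed, 0.0 = fully slashed).
func basketValue(b Basket, collateral map[ValidatorToken]float64) float64 {
    value := 0.0
    for token, ratio := range b.Ratios {
        value += ratio * collateral[token]
    }
    return value
}

// Example: a two-validator basket with equal ratios, where validator "B" is
// entirely (socially) slashed, is worth half its pre-slash value:
//
//    b := NewEqualBasket([]ValidatorToken{"A", "B"})
//    basketValue(b, map[ValidatorToken]float64{"A": 1.0, "B": 0.0}) // == 0.5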

To understand how third parties can update the composition of the basket, and how conversion between different LSTs works, we must first understand a new unbonding mechanism.

Dynamic Unbonding Period

Even if protocols don’t want to add native curated delegation tooling, they need to change their unbonding mechanism to account for the existence of LST protocols.

LSTs make it significantly cheaper to attack the native protocol. To change to a malicious validator set, attackers no longer have to use the native token. Instead, they just have to attack the LST protocol.

This is incredibly problematic, because stakers are incentivized to use an LST, and it’s impossible to stop them from doing so. By this same logic, it’s also impossible to create an LST protocol that makes attacks expensive.

Since users are incentivized to use LSTs and LSTs effectively enable an attacker to avoid slashing that would normally occur in the unbonding period, native protocols need to rethink the mechanisms to make attacks more expensive.

Any new mechanism to make attacks more expensive MUST be enforceable by the protocol. If a mechanism can be bypassed by wrapping a token and if users are incentivized to do so, then they will bypass it (note that not all co-authors agree on this point). One such mechanism that can be enforced is to limit the rate of change of the validator set.

If it is true that LSTs allow attackers to bypass slashing during the unbonding period, and that native protocols can make attacks more expensive by limiting the rate of change of the validator set, then it makes sense to change the unbonding period to be based on the rate of change of the validator set. Instead of there being a flat period of time for each unbond, the time can be dynamic depending on how much of the validator set has been changed.

Making Attacks More Expensive by Rate Limiting the Change in the Validator Set

Since networks can’t stop stakers from using LSTs, they need some other mechanism to increase the cost of an attack. They can do this by putting a constraint on the rate of change in a validator set.

The longer it takes to change the validator set, the more expensive it becomes to attack the network. Limiting the rate of change of the validator set does not guarantee that attacks are unprofitable; it only makes them less profitable.

For example, assume there is an LST protocol that determines the native protocol’s validator set via token voting with its own separate token. Attackers could borrow the LST protocol’s token, change the validator set of the native protocol, and then return the borrowed LST protocol tokens. Currently, this attack could in theory occur in a single block.

If the rate of change of the validator set is constrained, then, assuming there is a premium charged for borrowing the LST protocol’s tokens, the attacker at least has to pay more. Again, this does not guarantee that attacks are not profitable. It simply makes them less profitable. PoS is arguably still fundamentally flawed.

Unbonding Queue Logic

Limiting the rate of change of the validator set can be achieved by using a queue. Each block, the queue is processed until applying the next change would exceed the allowed limit. If a given change does not increase the rate of change of the validator set, then the queue is bypassed and that change is applied immediately.

The exact curve that determines the allowed rate of change of the validator set is somewhat arbitrary and is not discussed thoroughly in this post; it requires extensive analysis. In the example curve used here, if the validator set has changed by less than 10% over the last unbonding period, all conversions and unbondings are instant. Once that threshold is passed, conversions and unbondings enter the queue.

Therefore, there does not need to be a flat timeout that all stakers must wait out. If the validator set has not changed significantly over the unbonding period, then the queue will be empty and delegators will be able to safely unbond within a single block. After they unbond, the rate of change increases. If the validator set has changed too rapidly over the course of the unbonding period, then delegators will not be able to unbond instantly. Instead, they will be appended to the queue, and their unbonding will be processed eventually.

One potential curve is plotted below for demonstration purposes only. It has not been proven to be secure.
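
As a stand-in for that plot, here is one hypothetical shape for the curve in code, matching the 10% threshold described above. The linear ramp and constants are assumptions for demonstration, not a vetted design.

// unbondingDelay returns how long a change must wait given the current
// rolling change in the validator set. Below the 10% threshold, changes
// are instant; above it, the delay ramps linearly toward the full flat
// unbonding period. Uses the standard library "time" package.
func unbondingDelay(rollingChange float64, fullUnbondingPeriod time.Duration) time.Duration {
    const instantThreshold = 0.10 // 10% of voting power per unbonding period
    if rollingChange <= instantThreshold {
        return 0 // the queue is bypassed and the change applies this block
    }
    // linear ramp from zero at the threshold to the full period at 100%
    frac := (rollingChange - instantThreshold) / (1.0 - instantThreshold)
    return time.Duration(frac * float64(fullUnbondingPeriod))
}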

Here’s some very crude pseudocode that hopefully clarifies the high-level queue logic. In this example, each action that changes the validator set, such as a delegator unbonding or converting between LSTs, gets “compiled” to a Change struct.

// Change represents a change in the validator set. This can be compiled from
// various actions such as redelegating, converting between LSTs, changing the
// curation of an LST, etc.
type Change struct {
    // addr is the address of the issuer of the change. This could be an individual delegator.
    addr Address

    // VotePowerDiff details the voting power changes per validator
    VotePowerDiff map[Validator]big.Int
}

// ProcessChange depicts the very high level logic applied to each change as
// soon as it is received. If the change results in a decrease in the rolling
// diff of the validator set, it is immediately applied. If the change increases
// the rolling diff, it is added to the queue.
func ProcessChange(valset []Validator, queue []Change, rateOfChange Rate, c Change) ([]Validator, []Change, Rate) {
    // calculate the percent change in the validator set
    delta := percentChangeInValset(valset, c)

    // immediately apply the change if it is negative or equal to zero
    if delta <= 0 {
        // apply the change to the validator set
        valset = ApplyChange(valset, c)
        rateOfChange = rateOfChange.Add(delta)
        return valset, queue, rateOfChange
    }

    // add the change to the queue if it cannot be immediately applied
    return valset, append(queue, c), rateOfChange
}

// ProcessQueue is run each block to empty the queue. It continually applies changes until
// the max rate of change is reached or the queue is empty.
func ProcessQueue(valset []Validator, queue []Change, rateOfChange Rate) ([]Validator, []Change, Rate) {
    // continue to apply changes until the max rate of change is reached or the queue is empty
    for len(queue) > 0 {
        // get the first change in the queue
        c := queue[0]

        // calculate the percent change in the validator set
        delta := percentChangeInValset(valset, c)

        // check the effect of the change without yet committing it. If the
        // max rate of change would be reached, stop processing the queue.
        if rateOfChange.Add(delta).GTE(MAX_RATE_OF_CHANGE) {
            break
        }

        // apply the change to the validator set
        valset = ApplyChange(valset, c)
        rateOfChange = rateOfChange.Add(delta)
        queue = queue[1:]
    }

    return valset, queue, rateOfChange
}
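
The pseudocode leaves percentChangeInValset undefined. Below is one hypothetical definition, using int64 voting power in place of big.Int for brevity. It makes explicit something the pseudocode elides: measuring a net diff requires a reference snapshot of the validator set from the start of the unbonding window, so that a change which undoes earlier movement produces a negative delta.

// percentChangeInValset returns how much further the proposed change moves
// the current set away from the reference snapshot. Validators absent from
// the reference are ignored for brevity.
func percentChangeInValset(reference, current map[Validator]int64, c Change) float64 {
    next := make(map[Validator]int64, len(current))
    for v, p := range current {
        next[v] = p
    }
    for v, d := range c.VotePowerDiff {
        next[v] += d
    }
    return diffFrom(reference, next) - diffFrom(reference, current)
}

// diffFrom returns the fraction of total voting power that has moved
// relative to the reference set. Moving power from one validator to
// another is counted once, hence the division by 2.
func diffFrom(reference, set map[Validator]int64) float64 {
    var total, moved int64
    for v, refPower := range reference {
        total += refPower
        d := set[v] - refPower
        if d < 0 {
            d = -d
        }
        moved += d
    }
    return float64(moved) / (2 * float64(total))
}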

Besides improving the security of the network, the second main benefit of this new unbonding mechanism is the ability to minimize the time it takes to convert one LST to another. This is an identical problem to unbonding, and therefore must be constrained by the same rate and added to the same queue.

Conversion Between Different LSTs

Similar to unbonding, the conversion between two LSTs is regulated using the same queue that is constrained by the percent change in the validator set.

For example, suppose the total voting power is 100 tokens and the max rate of change over the unbonding period is 33%. The maximum number of LST tokens that could be converted to a completely different LST within the first unbonding period would be 33 tokens. In order for the remaining 67 tokens to be converted, the stakers would have to wait a total of 3 unbonding periods.

However, things begin to get interesting if LSTs are not completely different.

For example, take a similar scenario where the total voting power is 100 tokens and the max rate of change is 33%. If we wanted to convert all 100 tokens to a different LST, but the LST being converted to only results in a 50% change in voting power, then it would only take half as long, since the validator set is only changing by 50% and not 100%.
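
These two examples follow a simple linear model; here is a hedged sketch of that arithmetic, where overlap is the fraction of voting power shared by the source and destination LSTs (all names hypothetical):

// periodsToConvert estimates how many unbonding periods it takes to move
// `amount` staked tokens from one LST to another, assuming only the
// non-overlapping portion of the two baskets actually changes the
// validator set.
func periodsToConvert(amount, totalPower, maxRatePerPeriod, overlap float64) float64 {
    // fraction of total voting power that actually moves
    effectiveChange := amount * (1 - overlap) / totalPower
    return effectiveChange / maxRatePerPeriod
}

// periodsToConvert(100, 100, 0.33, 0)   // ≈ 3: completely different LSTs
// periodsToConvert(100, 100, 0.33, 0.5) // ≈ 1.5: 50% overlap takes half as long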

Similarly, a conversion does not always result in an increase in the percent change of the validator set. It’s equally possible that a conversion between LSTs reduces the change in the validator set, in which case the conversion is allowed to be performed instantly.

Updating Curations

Updating the compositions of a validator subset is an isomorphic problem to converting between LSTs, and thus must go in the same queue mechanism.

Further Optimizing Queue Processing by Cutting in Line

The processing of the queue can be further optimized by adding the ability to batch opposing conversions. For example, the conversion at the back of the queue (the one that would wait the longest) can be batched with one near the front (which will be processed sooner) if combining the two reduces the total change in the validator set. This allows those at the end of the queue to “cut in line”, which shrinks the queue by merging two conversions and reduces the total change in the validator set.
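
A hedged sketch of the merge step, reusing the Change shape from the pseudocode above but with int64 diffs for brevity: two queued changes are combined only when the merged change moves less total voting power than the two applied separately.

// magnitude sums the absolute voting power moved by a change.
func magnitude(c Change) int64 {
    var total int64
    for _, d := range c.VotePowerDiff {
        if d < 0 {
            d = -d
        }
        total += d
    }
    return total
}

// mergeIfOffsetting lets a change at the back of the queue "cut in line" by
// combining it with an earlier, opposing change. It returns the merged
// change and true only if merging reduces the total movement of the
// validator set. (A real implementation would also track both issuers.)
func mergeIfOffsetting(a, b Change) (Change, bool) {
    merged := Change{VotePowerDiff: map[Validator]int64{}}
    for v, d := range a.VotePowerDiff {
        merged.VotePowerDiff[v] += d
    }
    for v, d := range b.VotePowerDiff {
        merged.VotePowerDiff[v] += d
    }
    if magnitude(merged) < magnitude(a)+magnitude(b) {
        return merged, true
    }
    return Change{}, false
}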

Batching is a complex problem that we likely don’t want the state machine to perform each block. Since users likely want to unbond or convert as fast as possible, they are incentivized to manually search for offsetting changes and pay the gas to batch unbondings/conversions in the queue.

Stopping Queue Spam

To stop users/bridges from filling the queue with unbounded amounts of conversions, and thus stopping all users from being able to convert their LSTs or unbond, there must be one more rule: users must not be able to fill the queue with more conversions or unbondings than they have delegations. For example, if a user unbonds half of their delegations but then attempts to convert all of their delegations to a different basket (aka LST), there must be some mechanism to prevent this. One implementation could be to cancel the unbonding/conversion that is earliest.
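
A hedged sketch of that rule, reusing the simplified Change and magnitude helpers above. This variant rejects the over-filling change outright; the cancel-the-earliest-entry policy mentioned above would be an equally valid choice.

// admitToQueue enforces the anti-spam rule: a user's queued unbondings and
// conversions may never exceed their current delegations. It returns the
// (possibly extended) queue and whether the new change was admitted.
func admitToQueue(queue []Change, delegated map[Address]int64, c Change) ([]Change, bool) {
    // sum what this address already has pending in the queue
    var pending int64
    for _, q := range queue {
        if q.addr == c.addr {
            pending += magnitude(q)
        }
    }
    // reject the change if it would exceed the user's delegations
    // (alternatively, cancel the user's earliest queued entry instead)
    if pending+magnitude(c) > delegated[c.addr] {
        return queue, false
    }
    return append(queue, c), true
}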

Incentivizing an Empty Queue

Identical to existing unbonding mechanisms, users/bridges stop earning fees once they enter the queue. This means they are incentivized to change the composition of their baskets or convert slowly, or to do so only when the queue is already empty.

Defending Against Monopolies

Monopolies are bad, mainly because they are both incentivized and fully capable of extracting an undue amount of value. In the case of LSTs, they also make attacking the native protocol cheaper. For example, if there were many LSTs, then an attacker would have to attack multiple LST protocols in order to change the validator set enough to mount an attack.

LSTs are incentivized to become monopolies. Quite literally, an LST protocol will be able to capture more value if it becomes a monopoly. A common argument against LSTs is that the network effects of the new token also make it difficult for competing LSTs to gain traction.

In order to defend against monopolies, protocols must make forking and converting an LST profitable.

Using the combined mechanisms above, protocols significantly lower the cost to fork and convert an LST. Third parties can permissionlessly and seamlessly take the profits earned by the LST creator.

This is because the time it takes to convert between two LSTs is now a function of their composition and how much the validator set has changed. The more value that is captured by an LST, the more incentive there is to fork it. A monopoly can still form, but only if it extracts less value than the cost to fork and convert it. This post does not estimate those costs to attempt to quantify the profitability of forking an LST.

Vampire Attacks on Popular LSTs

The following set of incentives for LST creators and delegators emerges:

  1. Pick a quality decentralized subset of validators, since it’s easy and they don’t want to get slashed (all forms of slashing, particularly social)
  2. Pick a similar set of validators to others, since that enables the ability to quickly convert between different LSTs
  3. Control their own LSTs, since that earns revenue

Let’s walk through the simplest example, a vampire attack. Assume there’s a popular LST that’s taking 10% of the revenue that goes to delegators. Anyone, including a bridge or lending protocol, could mirror that LST and either incentivize users to switch or automatically convert them to its own vampire LST. Since the LSTs are identical, the protocol enables instant conversion between them, and they are effectively fungible. Since the vampires are now the owners of the LST, they control the composition and get the revenue.

The original popular LST does have recourse! It can meaningfully change the composition of the LST so that it and the vampire no longer have identical compositions. This comes at a significant cost, though. Meaningfully changing the validator set likely means entering the queue, which means it won’t be earning fees. Interestingly, this also risks losing users to other LSTs. This is because delegators wanting to convert to the popular LST would be changing the validator set further, and therefore must wait in the queue. However, the opposite occurs if users want to convert away from the popular LST to the vampire or other existing LST. This is because converting away from an LST that just changed the validator set reduces the difference in the validator set, and thus the converters would be able to skip the queue!

Further Incentives

Here are a few other random details/incentives/externalities.

MEV

As pointed out by Barry Plunkett, it is very likely that stakers will purposefully change the validator set to increase the time that it takes for others to convert an LST. For example, if there is an arbitrage between two LSTs that are not identical, MEV searchers can purposefully change the validator set significantly to increase the rate of change for that unbonding period. This means that conversions between non-identical LSTs must enter the queue.

There are many different factors here that need to be thought through in more detail. While filling the queue has an opportunity cost, the arbitrage has potential profits. The time it takes to convert between LSTs is highly dependent upon how the validator set has changed recently. If a given change in the validator set is net negative for the rate of change, then it can occur instantly.

This post does not cover this, but if you are interested in this topic please reach out to any of the authors.

External LSTs

It’s very difficult if not impossible to create an LST that cannot also be represented as a basket of validator tokens, especially considering that the only way to delegate would be by selecting validator tokens. As long as the LST is redeemable by users, then even external LSTs are forkable. If external LSTs limit the rate at which users can redeem their tokens, then users have more of an incentive to use an LST that does not have such a restriction, for example one created using the native tooling.

Give Inflation/Fees to Some Validators Outside of the Active Set

Protocols want validators to be incentivized to join the active set should the need arise. Giving fees to some validators outside the active set, say 50 of them, ensures that a large set can be included in a basket without the basket losing out on funds. It also ensures that some validators are always ready to join the active set.

Forcing the Usage of an LST for a Bridge

While definitely not required, bridges might also want to add the ability to restrict LSTs that are allowed to be transferred across. As mentioned earlier, if ICA is added, then there would be nothing that stops a bridge from converting the LSTs bridged across to ones that it chooses.

Conclusions

Upsides

  • Arguably, the unbonding mechanism is significantly more secure than the existing one.
  • Third parties get to capture the value that would otherwise be captured by an LST protocol.
  • It becomes trivial to delegate to a large curated subset of validators.
  • Every bridge is incentivized to control its own LST(s), which means we will have much more decentralized control of the validator set. Besides avoiding the externalities of a monopoly, this might further increase the cost and difficulty of an attack.
  • LST creators are incentivized to come to consensus over a quality decentralized set of validators.

Downsides

  • While having the ability to seamlessly and permissionlessly fork an existing LST will almost certainly inhibit the monopolization of the validator set to some extent, it’s very unclear by how much.
  • Since these mechanisms are setting in place multiple new incentives, there is essentially a guarantee that there will be unpredicted, and likely undesirable, externalities.
  • This mechanism decreases the speed at which the validator set can change, and there is an incentive for LST creators to mirror each other. As stated above, mechanisms that don’t limit the rate of change of the validator set are not safe, so we will have to switch to something that does anyway.
  • This mechanism might increase the size of the state machine. That being said, some LST logic will exist somewhere, and since that logic is critical to the security of the protocol, having it in a place where social consensus has the ability to hardfork and fix bugs or socially slash might be worth it, for the same reasons as keeping delegation logic inside the protocol. It’s also worth noting that the current staking and distribution mechanism is wildly inefficient. It’s possible that the above mechanisms could actually simplify and reduce the state machine. For example, instead of every delegator claiming rewards frequently, rewards can be automatically deposited once into the collateral for each validator token.
  • Proof of Stake is still fundamentally flawed, but if we switch unbonding mechanisms we can at least buy some time to properly find a solution.

Future Work

  • There could be quite a bit of work done to find an optimal unbonding rate. The incentives almost certainly change with the rate, and how they change is unclear. In fact, while having the ability to fork an existing LST can only hurt monopolies, it’s very unclear to what extent.
  • Actually fixing Proof of Stake.
  • See MEV section

Once again, nice write up!

Why do the tokens need to be bundled together? Why can’t I just buy 100 LST for each of the 5 validators I choose, rather than receiving 1 LST that represents 500 TIA evenly distributed across the 5 validators?

If there were only 100 types of LSTs it would make them more fungible than having numerous permutations of LSTs per basket of validators.

Put another way, could the picking and redistribution of stake to validators be something solved in the front end, so the chain itself remains simpler?

On a totally different note, given that most attackers will try to exit within, say, the first 10 minutes, it might be good to have two queues, as sketched below:

  • Cumulative diff in validator set can’t exceed 1/3 in 3 months; and
  • Cumulative diff in validator set can’t exceed 5% in 10 minutes
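
Concretely, this two-queue suggestion amounts to admitting a change only if it fits under both budgets. A minimal sketch, with hypothetical constants matching the numbers above (window tracking elided):

// withinBothLimits admits a change only if it fits under both the slow and
// the fast rate-limit budgets.
func withinBothLimits(longWindowChange, shortWindowChange, delta float64) bool {
    const (
        maxLong  = 1.0 / 3 // cumulative diff budget over ~3 months
        maxShort = 0.05    // cumulative diff budget over ~10 minutes
    )
    return longWindowChange+delta <= maxLong && shortWindowChange+delta <= maxShort
}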

Also, you’ve mentioned using a cumulative diff queue instead of a net diff queue: if I redelegate 1000 tokens from A to B and then from B to A, the validator set hasn’t changed at all, but a cumulative diff would think it had changed by 2000 tokens. What’s your view on the tradeoffs?


I read through the article; it’s very interesting even if I don’t 100% understand it. I want to discuss when we should quit the queue logic. When everyone sees that I need 30 days for unbonding, but it’s possible for somebody to initialize unbonding after me yet finish unbonding before me, I’m sure I will lose money.

“Give Inflation/Fees to Some Validators Outside of the Active Set”: even if it’s possible, there are a lot of details that need to be discussed.

Generally speaking, I believe it’s better to let a settlement layer or L2 handle these problems. The users of these LSTs should take the risk but not the mainnet validators. Then we should carefully handle the exchange between TIA and LSTs, just like every country has a foreign exchange management department, but I don’t think it’s a technical problem.


Great write up guys,

If I have to sum it up, the measures proposed above make it easier for competing LSTs to vampire attack the incumbent to prevent monopolies. While I think this is a good measure, I don’t think it will be completely effective. If we look at chains like the Cosmos Hub, where the LSM allows users to move freely between LSTs, the lower-fee options are still not widely adopted, and vampire attacks on the incumbent might not always be successful. This leads me to conclude that LST monopolies have a substantial social and liquidity component to them.

I think to actually prevent monopolies, alongside a dynamic unbonding period; a more direct approach is required and this should be done at the ICA level.

  1. ICA connections have to be governance gated: I think it is unreasonable that any chain could connect to another chain without prior consent. There are many use cases where ICA could be subjectively undesirable. Governance gating ICA also gives the Celestia chain substantial power to regulate LSTs and prevent monopolies.

  2. It should be more expensive to delegate to the incumbent after a certain threshold. If an ICA account is accumulating stake beyond a certain point the cost for the users to delegate to this account should increase.

  3. This can be implemented through an in-protocol LST tax which should apply to all protocols and scale based on the amount of assets staked through a particular ICA. The scaling factor of the tax should be substantial enough wherein it makes monopolies impossible.

  4. Such a tax implemented alongside the dynamic unbonding period will make it easier for users to switch to cheaper options.

  5. Isn’t a tax the same as lower fees? When a new LST offers a basket with lower fees, the delta between the incumbent and the new LST is limited, as the incumbent could lower fees to maintain a monopoly, or the liquidity effects of the incumbent could undo the effects of any marginal cost reduction. A tax theoretically has no upper ceiling, meaning this delta could be substantial and impose heavy costs on delegators if they do not switch. This would make monopolies near impossible.

  6. An LST tax could also be used to protect against principal-agent attacks that could affect an LST. This tax could be collected into a community pool to defend the peg of LSTs (or against other economic attacks).

Ultimately, I think the work done in the post is fantastic; and I also like that Celestia is one of the few communities where we are having this discussion before the train has left the station. I strongly believe that the dynamic unbonding period should be accompanied by a dynamic tax that scales by each ICA.


Thanks for reading and putting forth a detailed response! Ideating is really good :blush:

While I think this is a good measure, I don’t think it will be completely effective.

wdyt of this paragraph? I tried to include it twice on purpose to emphasize the point that defending against monopolies does not necessarily mean no monopolies will exist.

The more value an LST captures, the larger the incentive is to fork it. It does not guarantee that no LST monopoly is formed; however, it does ensure that if one does form, it will extract the minimum amount of value.

Do you think this statement is true?

If we look at chains like the Cosmos Hub where the LSM allows for users to have complete movement between LSTs

If the LSM does allow for freely flowing between LSTs, this is a great point! My understanding of the LSM on the hub is that it doesn’t actually allow the free flowing between LSTs because validator tokens are not fungible. I could be missing something tho!

Even if they are freely flowing, per the second section that describes the dynamic queue, doing so without limiting the net change in the validator set is extremely unsafe. This is relying entirely on the LST to make attacks expensive!! Security is completely external to the protocol at that point.

These mechanisms minimize the friction to forking significantly while remaining safe, since in theory an LST can be forked and converted to in a single block, which aims squarely at establishing the incentive to minimize harm. We’re trying to avoid the worst case scenario, even long term, as there is no guarantee the best case scenario occurs.

ICA connections have to be governance gated

For other chains this might be an option, but Celestia is governance minimized. The separate issue I have with governance is that as long as we use tokens, I don’t think this can be secure. Especially considering that the ones voting are essentially LST protocols, not necessarily validators and certainly not token holders.

LSTs make it significantly cheaper to attack the native protocol. To change to a malicious validator set, attackers no longer have to use the native token. Instead, they just have to attack the LST protocol.

It should be more expensive to delegate to the incumbent after a certain threshold. If an ICA account is accumulating stake beyond a certain point the cost for the users to delegate to this account should increase.

unfortunately I don’t think limiting accounts can be an effective form of regulation long term. Just like we see airdrop farmers manage thousands of accounts, chains can do the same thing and there’s no deterministic mechanism for measuring. Ethereum PoS is a futile effort to prevent DPoS imo


Thanks for reading and responding!! :blush:

The flat unbonding period is replaced with a dynamic one. So if you enter the queue, you would unbond before they do. It’s possible that they could batch their conversion of an LST with someone lower in the queue, however they couldn’t do that with a pure unbond if that makes sense.

Generally speaking, I believe it’s better to let a settlement layer or L2 handle these problems.

What I’m describing is effectively improving delegation and the unbonding period. While in theory this state could exist on an L2, considering it’s critical to consensus, I think it makes sense to keep it on the L1.


They don’t have to be and it’s definitely possible that they won’t!! I’m making assumptions based on past usage

Curated subsets of validators are useful because delegators can easily hedge slashing risk and staking profits, while also selecting for broader more subjective preferences such as decentralization, quality of setup, or to fund specific causes.

given that most attackers will try to exit within, say, the first 10 minutes, it might be good to have two queues:
Cumulative diff in validator set can’t exceed 1/3 in 3 months; and
Cumulative diff in validator set can’t exceed 5% in 10 minutes

This is an interesting idea! The initial proposed curve attempts to do something similar if I’m understanding correctly, but this could be an improvement.

Also, you’ve mentioned using a cumulative diff queue instead of a net diff queue. What’s your view on the tradeoffs?

That’s a great question. The above mech strictly uses a net queue, which naively provides the same amount of security. Could be wrong tho! Without a net queue, I’m not sure we see this property:

Pick a similar set of validators to others, since that enables the ability to quickly convert between different LSTs

If big LSTs get forked, they are more incentivized to change their composition.

Meaningfully changing the validator set likely means entering the queue, which means it won’t be earning fees. Interestingly, this also risks losing users to other LSTs. This is because delegators wanting to convert to the popular LST would be changing the validator set further, and therefore must wait in the queue. However, the opposite occurs if users want to convert away from the popular LST to the vampire or other existing LST. This is because converting away from an LST that just changed the validator set reduces the difference in the validator set, and thus the converters would be able to skip the queue!

Then we should clarify whether the queue is a shared queue or each address has its own private queue. If it’s a shared queue, we can’t expect it to get cleared, so when should a user exit? If it’s a private queue, the ordering can easily get into trouble.

When to start and end a batch has similar problems.

note:

I wrote this whole big old thing, and came to the conclusion that this method is just too coercive and that the higher value problem is likely pricing. The market driven approach described originally here, is better than the approach that I describe below.

I think Celestia should be her own liquid staking monopoly

To me, this proposal seems quite complex, and to @evan’s credit, he understands deeply that it doesn’t solve the fundamental underlying problem, “actually fix pos”. I tend to agree. Celestia is a particular joy because of all of the work that has gone into addressing base layer problems. I’m not entirely sure that this proposal doesn’t actually create additional problems; there are two pieces of it that to me seem to be in inherent tension:

  • Vampire attacks
  • Slowing the rate of validator set change

The proposal seems to riff on the liquid staking module in a number of ways, and proposes what seem to me to be genuine improvements to it.

However, I believe that the proposed design does reduce the primacy of Celestia, especially when liquid staking is considered.

If Celestia is to make significant modifications to staking logic, I think it might make a lot more sense to make liquid staking a native part of Celestia, instead of building an adapter for liquid staking third parties.

historical background

To my knowledge, liquid staking was recognized as a need when it was very clear that centralized exchanges would set up liquid staking no matter what anybody did. I also think that this remains true to this day, and that there’s a real argument for that. However, since we are already having discussions about reworking the staking module, which indeed needs rework, I think that it may be a good idea to take a step back and examine whether or not it is favorable for celestia to support liquid staking by third party chains at all.

problems with liquid staking

To me, the biggest problem with liquid staking has always been that it pays the same as regular staking. Liquid staking does not provide the same level of economic security and stability as regular staking. Certainly, the ICA route for liquid staking is significantly better than using a multi-signature wallet, or even a single-signature wallet, to create a liquid staking setup.

But it still has the very real problem that regular stakers accept immobility of their tokens, as well as slash risk.

Liquid stakers on the other hand take far less risk. They accept only slash risk, and in analyses we’ve done internally, it’s actually quite possible to create an insurance fund to relieve liquid stakers of slash risk, or partially relieve them. This is because slashes are relatively rare.

why do delegators get slashed?

We slash delegators as a punishment for choosing a non-performant validator. Currently there are two slashing conditions:

  • Uptime
  • Equivocation (double signing)

But if it’s possible to eliminate slash risk through an insurance fund as described above, and I fully believe that it is, then liquid stakers gain a total advantage – regular stakers are accepting both immobility of their tokens and slash risk, while liquid stakers accept none of this.

proposed model

Celestia should never open the staking ICA port. Instead, Celestia should implement liquid staking at the base layer, and provide a mechanism that can eliminate staking rewards for known liquid staking products using multisignature accounts.

This is the best of all worlds for Celestia because Celestia will never need to accept the risks that come with liquid staking products other than her own.

Celestia’s liquid staking product should definitely include a tax. The tax could go to the community pool, or simply never be issued. This way, traditional stakers will be able to benefit fully from the immobility that they’ve chosen to accept.

I’d estimate that immobility is worth 60%, and I am getting this number from the fact that slashing is really genuinely rare and that most slashing events are very small, typically downtime slashes. The majority of double-signing events that have occurred in Cosmos have been operator error. I can’t really say that there’s been a Byzantine equivocation slash, except for a very interesting incident on CRO where a validator was compromised and eventually the attacker double signed (or threatened to, I forget which), and that was very Byzantine, but the attacker was most certainly not threatening consensus in the traditional understanding of equivocation.

benefits of proposed model

  • Celestia controls her own security.
  • Reward to traditional and liquid stakers is in line with the amount of risk taken by both.
  • Anyone is free to build restaking on top of Celestia’s liquid staked Tia.
  • Less danger will be posed by restaked Tia, because there are fewer layers.

downsides

  • I don’t like the heavy-handed approach of using governance to mark liquid staking protocols… But if we are viewing them as a security and primacy risk (and we should) then I do think it makes sense.
    • One way to sidestep this would be to do nothing, as multisig risk is historically greater than slash risk or immobility risk.
    • This also goes against governance minimization, and could create contentious votes.
    • Even evan’s proposal could create incentives for liquid staking protocols to never move away from multi-signature wallets that provide the full benefit of direct staking.
    • I’m walking away from this post fairly conflicted about how to deal with even the current incentive problems of liquid staking protocols.
  • The development burden of the actual liquid staking protocol is placed on Celestia (but I think that it is less than what is described in the above post)

be the monopoly


subsequent literal shower thoughts

The biggest weakness of this proposal is that it can do very little to deal with the risks posed by multi-signature liquid staking tokens without a governance mechanism.

The biggest strength that I see in @evan’s proposal is that it is fully market driven. While at first I saw a tension between slowing the rate of change in the validator set and the rapid conversion of LSTs, in the shower I realized that this combination allows the market to make determinations, and frankly that’s preferable to decisions made by governance. Maybe the only caveat here is that so far, in most scenarios for liquid staking, the market has tended toward natural monopolies.

My post needs to be edited tomorrow, to remove the governance crutch. If I can’t figure out how to remove that crutch, then the original proposal by Evan is probably better than mine. I still agree with @LST that taxes are good in this scenario, as I do think that it’s possible to create very meaningful methods to limit or even eliminate the risk of slashes.

Awesome blog post! I am wondering what ideas can be born out of the following bullet points: :point_down:

If we have a hypothesis that every bridge is going to have its own version of the LST, what should this look like from a technical pov? I understand that the easiest way to achieve this is multisig based, or the upcoming ICA/Stride way. But what other ways am I missing here?

Is it correct to assume that snark accounts can be the foundation layer for future LSTs, as Aiden mentioned in this podcast?

Another point I don’t fully understand is custom LSTs, especially their programmability. Let’s say a rollup has its own bridge to the DA Layer. Do you think it is feasible for rollup developers to enforce a rule that they get, let’s say, a 1% cut from their LST holders to fund whatever their community decides is necessary? While this is not a credibly neutral LST, we can see that there are users preferring more custom LSTs or versions of them (e.g., Blast).

If we come to the point of custom LSTs, how hard will it be for such rollup teams to develop their own version? Do you envision a module/library/framework for this? So in the end rollup teams can have their own curated list of aligned validators.


To my knowledge, this is correct.

This is another problem with my proposal. I don’t know of another decent way to limit LST protocol accounts, and they could always be broken up into many many many many many many accounts.

Other than tokens, did you have thoughts on doing governance? I think that especially for larger chains, the current Cosmos style of doing governance is not ideal.

Of course the flip side of this is osmosis, which has used governance extremely heavily since the start and I think has gained a sense of community ownership from the use of governance.

I’m really interested to hear more thoughts on fundamental problems in proof of stake. I guess that this conversation does center on several of them:

  • Governance
  • Misaligned incentives in liquid staking vs regular staking
  • Relative inability of protocols to define liquid staking as they choose to because any account can stake.

Just noting that this particular conversation thread is quite interesting to me, and I think that it somehow touches on a well-placed discontent with the current state of governance, since at present, governance votes line up with consensus votes very neatly; that was an intentional feature of the PoS design of Cosmos, to my knowledge.

But I think that this has created perverse incentives over time. There are persistent rumors that governance votes are sold as a profit model for validators choosing to charge 0% for their services.

I think that I believe these rumors.

Although I was initially against it, because the validators serve as a sort of a senate for the chain and community, I think that it makes sense to decouple the validation and governance, or allow stakers to decide if they would like their validator to participate in governance on their behalf.


Hey, thanks for the reply. Here are my thoughts:

I do agree with you that a dynamic unbonding period and validator set movement restrictions will prevent a specific type of economic attack wherein the LST protocol is compromised and all stake moves to one validator to perform an attack on Celestia. But if an LST monopoly is compromised, theoretically the attacker doesn’t have to move any stake at all to attack the chain (in a lower capacity, however).

To the 2nd question, I do not think it will extract the minimum value, it will extract a lower value. Again, my examples are from the Cosmos Hub, where despite there being massively cheaper options; delegators still delegate to the monopoly because of more integrations, greater chance of airdrop inclusion, deeper liquidity, etc.

On the 3rd point, on the Cosmos Hub right now it’s theoretically possible to undelegate into LSM shares and rebond those shares with other LST protocols. With Quicksilver, any validator share should work, as it allows for free selection of validators, and with pStake I think (might be wrong) they automatically redelegate delegation shares to meet their set.

I do agree that the dynamic queue is a safer version of the LSM.

I think that instead of trying to protect against a monopoly we should just try preventing them. Apart from specific economic attacks, LST monopolies have very adverse social consequences. You will essentially have a gatekeeper on all TIA DeFi that is a single team. You will have all validator choice being made by this entity, which will create a client-provider relationship between all Celestia validators and one team.

LSTs are an inherently monopolizing instrument; most chains are prey to LST monopolies. These monopolies are enabled by the fundamental flaws in DPoS. I think for Celestia the ideal situation would be to have 10-20 decently sized LST providers with varied validator distribution mechanisms and varied product offerings/DeFi strategies. This will create maximum benefit for users while minimizing fees and social monopolies.

With regards to ICA being governance gated, there can be any other way to determine ICA eligibility. It could be made into a CIP or determined by any other appropriate decision making mechanism; doesn’t have to be token governance.

You can prevent one chain from launching multiple ICAs if it’s gated by some rough social consensus. Every time you establish an ICA connection, it is specified over an IBC connection, which is between two chains. So if a new ICA connection being proposed is suspected to be a ghost account of an existing provider, then it can easily be rejected, and the provider attempting this can be kicked out or penalized.

I think the ICA governance if done via CIPs would be in line with the vision for the ‘Core Values of the Celestia Social Layer’ outlined by Mustafa.

To conclude, I think if we are trying to add logic to Celestia which protects against LST monopolies, it should prevent them. The adverse social consequences of LST monopolies have a 100% chance of happening, compared with an economic attack, which, though more harmful, has a lower chance of occurring. I would still very strongly argue for an LST tax, as it acts as a stable liquidity premium for using LST protocols and has a strong effect on completely preventing monopolies (maybe there is a better way to deliver a tax than gating ICA).

If monopolies are absolutely prevented, there would be no need to defend against monopolies.

PS: To absolutely prevent monopolies I am arguing for the LST tax alongside dynamic unbonding (which would be crucial to allow for migration of stake between LSTs).

what about social slashing? considering it’s critical to PoS’s strategy of maintaining crypto-economic security, I don’t think that it can be ignored.

They accept only slash risk, and in analyses we’ve done internally, it’s actually quite possible to create an insurance fund to relieve liquid stakers of slash risk, or partially relieve them.

I’d be curious to read more about the insurance fund! does there need to be an oracle for payouts?

the market has tended toward natural monopolies

yeah definitely. I purposefully used the words “defending against” instead of “preventing” here. There could definitely still be a monopoly; users just have a more credible threat against that monopoly, specifically when it comes to it absorbing profits. If there’s no value to capture from an LST, then a monopoly will likely still form. However, we are avoiding the catastrophe that is a monopoly that captures too much value.

did you have thoughts on doing governance?

until there’s good onchain id, no. even then, social consensus is best even if it’s really slow.


mind expanding on this attack? is there an attack that doesn’t require a quorum or half quorum of validators to collude, or does it require collusion? The attack I’m describing can currently occur within a single block with no collusion.

yeah definitely fair. after I wrote that comment, I realized that since I did not do any formal calculations this was definitely not a claim I was comfortable making, and edited it.

delegators still delegate to the monopoly because of more integrations

if bridges have the ability to fork and instantly convert, then I think we’ll see more bridges create their own. the bridge is already forcing the new mint of an asset, so integrations are technically broken with each bridge path. if users are already bridging point to point (like what we see now in cosmos) then chains might as well create their own LST. especially if it earns them significant revenue. again, if there is no value being captured, then it will likely still be a monopoly. If there is value being captured, then there is nothing an LST creator can do to stop a bridge from forking them an arbitrary number of times.

in the future, it’s possible there are no breaking upgrades for a year plus, do you think that would be sufficient? Any controversy would also delay decisions. I don’t think social consensus is a viable mechanism for determining these things. As we just saw, even without ICA, liquid staked assets still occur.

To conclude, I think if we are trying to add logic to Celestia which protects against LST monopolies, it should prevent them.

tbc, that is the point of this post! to attempt to prevent them. but since there are no guaranteed ways of doing that, it at least reduces the harm done. in the best scenario, it also prevents them. in the worst, it protects against them.

it acts as a stable liquidity premium for using LST protocols and has a strong effect on completely preventing monopolies

is there a way to guarantee that this tax is enforced? are we just incentivizing using a multisig or otherwise external lsts?

also fwiw, I totally get why monopolies are bad :slight_smile:

someone has to pick the validator subset used for the LST in order for the LST to work. this can’t be done on celestia imo since one of celestia’s goals is to be governance minimized. the only other mechanism would be for social consensus to effectively hardcode the validator set, which ruins the protocol being permissionless.


So actually this relates to my shower thoughts. In the end, although I enjoyed writing the post, I may have come around fully to your point of view. The really critical distinction is that you’ve come up with a way to make the determination of vals by LSTs fully market driven. By the time I had finished my post, I had already struck on my own misgivings about relying too heavily on governance.

By the time I had finished my shower, I had realized that that was actually the critical difference between the two plans.

Neither plan does much about cexes, pricing, or multisigs (I consider pricing a really critical item), but the one you posted had the really clear advantage of being market driven.

You figure that we might be able to fix pricing?

Mobility is worth a lot.

Slashes are worth a lot less.

Current lsts allow for mobility and slash exposure. Imo, it won’t be long until the lst provides for a slash insurance fund.

I’m in support of social slashing. Does Celestia currently support the ability to slash socially?

PS: when you say social slashing, are you referring to a governance proposal to slash a particular validator?

I also think that social slashing needs to be considered very carefully, but overall I’m in support of its implementation. The best example of this would be sunflower validator on the hub. They are clearly Byzantine but they have enough vote power to stay there.


by social slashing, I mean a hardfork that includes a script to manually change the state to slash validators, so yeah it does support it but every blockchain does by that definition.

Thanks for the writeup on this!

The point for queue cutting in line was quite cool! In my head, I’m thinking of it as a “coincidence of wants” situation, which we want to enable because:

  • For security we want rate limits in validator power changes
  • We want to enable more secure methods to move between validators, to lower the LST advantage

I don’t get how we’re solving the problem if there are anti-competitive strategies deployed by LSTs (or even just a lack of queue updates from LSTs).

Let’s assume a large LST (stLargeTia) exists in the first place. I want to make a new LST (vampTia).

vampTia wants to incentivize converting stLargeTia to vampTia. They want to do this by making a contract that:

  • Sends your stLargeTia to vampTia contract
  • vampTia starts unbond of stLargeTia
  • {someone comes in with some Tia capital to clear things, could even be protocol controlled} to make a new vampTia stake position that offsets stLargeTia. The stLargeTia unbond “should be instant” due to offsetting, but it depends on the stLargeTia updating for this. Then the liquidity here is freed up.

However there are two things stLargeTia could do that are anti-competitive here,

  • They could adversarially delay how long it takes from user action, until an unbonding request starts
  • They could just “not implement” the queue cancellation logic. E.g. they send underlying funds to you after the initial unbond time finishes, not when the unbond truly completes. (or they send funds back to you at e.g. max(1 month, unbond queue time) – which gives vampTia the same liquidity crunch problem again)

Do we rely on the social layer for these issues still? I think not implementing queue cancellation will be a serious issue


yeah very true! This deserves more thought, as it is likely more complex than what I mentioned above. The comment below makes the assumption that the delay is universal for all users, but that doesn’t have to be true.
