Questions regarding p2p and its design

Hi all, I’m a big fan of Celestia. I have some questions regarding the current Celestia p2p design and some minor DAS questions.

  1. p2p design

It seems the p2p layer is using the (IPFS) DHT implementation for peer discovery and for disseminating sampled shares. As mentioned in this article, there are pros and cons to the different p2p designs. What was the rationale for choosing a DHT over the other designs, and what are your thoughts on how to mitigate the threats (e.g., Sybil attacks, Byzantine behavior) mentioned in the article?

  2. Do validators download full block data? Or do they just do DAS like light clients? Or do they do both (download the full block + DAS)?

  3. According to the different levels of light client security outlined here, is Celestia currently providing level 4 security?

3 Likes

For #2, I think validators download the full block data, so they won’t need to do DAS like light clients.

For #1, with DHT-based peer discovery, I don’t yet know how it’s handled when the network partitions and nodes are unable to reach the peer that holds the data.

2 Likes

The p2p system is currently split in two between “consensus” and “light clients”. See the diagrams in Setting up a Celestia bridge node | Celestia Docs and The lifecycle of a celestia-app transaction | Celestia Docs.

Consensus p2p is handled by the celestia-core repo, which is a fork of CometBFT (itself the mainline fork of Tendermint). Validators gossip full blocks to each other and run full Tendermint consensus. The DA sampling p2p is handled by the celestia-node repo and uses libp2p.
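
To make “uses libp2p” a bit more concrete, here is a minimal, self-contained Go sketch of a libp2p host joining a gossipsub topic, publishing to it, and reading from a subscription. The topic name and message are made up for illustration; this shows the generic libp2p pubsub primitive, not celestia-node’s actual wiring.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	// Start a bare libp2p host (no bootstrap peers; illustration only).
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Build a gossipsub router on top of the host.
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		panic(err)
	}

	// Join a hypothetical topic and subscribe to it.
	topic, err := ps.Join("example/headers/v0")
	if err != nil {
		panic(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		panic(err)
	}

	// Publish a message; with real peers connected, it would be gossiped
	// to every subscriber of the topic.
	if err := topic.Publish(ctx, []byte("hello")); err != nil {
		panic(err)
	}

	// Wait briefly for an incoming message. With no peers connected this
	// simply times out, which is fine for a sketch.
	recvCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
	defer cancel()
	if msg, err := sub.Next(recvCtx); err == nil {
		fmt.Println("received:", string(msg.Data))
	}
}
```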

3 Likes

Hey @NIC619. @Wondertan, a.k.a. hlibp2p, is here to answer your questions.

  1. The sampling network does not use the DHT design mentioned in the article. The article states: “where each participant stores a portion of the overall data stored in the hash table”, and this is not how our system works. The article only defines two designs, while we are a “secret third thing”. In short, we use the DHT only for peer discovery, with dissemination happening over a protocol called Shrex, which will soon be replaced by Shwap (expect a CIP soon); see the discovery sketch after this list.
  2. Originally, we wanted validators to do sampling only by default and even attempted that, but we had to pivot due to limitations in the CometBFT software and the increased complexity. However, we plan to merge the core and node networks in the future, and we will revisit this then.
  3. We publicly state that we are Level 3, while technically we have Level 4 implemented. The issue is that the reconstruction protocol is very inefficient, and we couldn’t prove it out at the current 8 MB block limit; only 2 MB blocks were proven to work. We will revisit this soon as we deploy the new Shwap sampling protocol.
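
As a rough illustration of point 1 (DHT for peer discovery only, with dissemination over a separate protocol), here is a minimal Go sketch using go-libp2p’s Kademlia DHT together with routing discovery: peers advertise a rendezvous string and then look up other peers that advertised the same string, while the data itself would be exchanged over a separate protocol (Shrex/Shwap in Celestia’s case). The rendezvous namespace is hypothetical, and a real deployment would connect to bootstrap peers first.

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/p2p/discovery/routing"
	dutil "github.com/libp2p/go-libp2p/p2p/discovery/util"
)

func main() {
	ctx := context.Background()

	// A bare libp2p host; a real node would dial bootstrap peers here.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Kademlia DHT used purely as a peer-routing table.
	kad, err := dht.New(ctx, h)
	if err != nil {
		panic(err)
	}
	if err := kad.Bootstrap(ctx); err != nil {
		panic(err)
	}

	// Peer discovery: advertise a rendezvous string in the DHT and look
	// up other peers that advertised the same string. Block data itself
	// would be requested over a separate request/response protocol.
	disc := routing.NewRoutingDiscovery(kad)
	const rendezvous = "example/full-nodes/v0" // hypothetical namespace
	dutil.Advertise(ctx, disc, rendezvous)

	peers, err := disc.FindPeers(ctx, rendezvous)
	if err != nil {
		panic(err)
	}
	for p := range peers {
		if p.ID == h.ID() || len(p.Addrs) == 0 {
			continue
		}
		fmt.Println("discovered peer:", p.ID)
	}
}
```
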
3 Likes

Thanks @Wondertan ! These are very clear and helpful answers. Looking forward to the Shwap CIP.

1 Like

Hear, hear! I think this is a fantastic idea, because single-repository designs make for easier maintenance and give developers greater exposure to each layer of the stack.

1 Like