Full-scale Danksharding requires a lot of small pieces of data (samples), and attestations to said data, to be communicated efficiently across Ethereum's Consensus Layer p2p network.
Several design questions around how this new information should be communicated remain open; collectively, they are known as the Data Availability Sampling Networking problem. Danny wrote a Request For Proposals post that provides background for learning more about the problem.
Model DAS is a fork of DAS Prototype that aims to model and benchmark a possible solution to the Data Availability Sampling Networking problem using Discovery v5 (discv5) overlay subnetworks.
I'm currently working on implementing the networking stack needed to create a Secure K-DHT discv5 overlay to support Data Availability Sampling. Concepts from DAS Playground are going to be implemented here.
Check out Model DAS's Project Proposal.
This repo contains various prototypes of the core DAS components: dissemination and random sampling.
```
cargo run -- -n <num-servers> -p <start-listen-port> -t <topology> [simulation-case] [ARGS]
```

This will spin up `num-servers` discv5 servers listening on consecutive ports from `start-listen-port` up to `start-listen-port + num-servers`, and then start `simulation-case`.
```
cargo run -- -n 500 --topology uniform disseminate -n 256 --batching-strategy 'bucket-wise' --forward-mode 'FA' --replicate-mode 'RS' --redundancy 1
cargo run -- -n 500 -t uniform --timeout 6 sample --validators-number 2 --samples-per-validator 75 --parallelism 30
```
Snapshots allow saving network configurations along with various RNG seeds, giving more consistent measurements and helping with debugging. Use the `--snapshot` flag with the value `new`, `last`, or a specific timecode, e.g. `2022-11-08-11:46:09`. Snapshots are saved in the `--cache-dir` folder (default: `./data`).
```
cargo run -- -n 500 --topology uniform --snapshot last disseminate -n 256
```
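A typical snapshot workflow, sketched from the flag values documented above (`new`, `last`, and a timecode), might look like this; the exact timecode shown is only a placeholder:

```
# Record a fresh snapshot (network config + RNG seeds) on the first run
cargo run -- -n 500 --topology uniform --snapshot new disseminate -n 256

# Replay the most recent snapshot for a consistent comparison run
cargo run -- -n 500 --topology uniform --snapshot last disseminate -n 256

# Or replay a specific saved snapshot by its timecode
cargo run -- -n 500 --topology uniform --snapshot 2022-11-08-11:46:09 disseminate -n 256
```

Replaying the same snapshot keeps the network topology and seeds fixed, so differences between runs reflect the simulation parameters rather than random variation.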