The Blobs project exposes **sequencing** and **data availability** capabilities of [Polkadot](https://polkadot.network) and [Kusama](https://kusama.network) for general use. Use-cases include rollups and inscriptions.
The Blobs codebase is located at https://github.com/thrumdev/blobs. There is a live parachain on Kusama with Parachain ID 3338 running the Blobs runtime.
In this documentation site, we'll often use the term Polkadot to refer to the Polkadot Relay Chain - the hub chain which provides security for everything running on Polkadot. Kusama runs on the same technology as Polkadot, so the Kusama version of Blobs works identically to the Polkadot version, just on a different network. You can mentally substitute "Kusama" for "Polkadot" when thinking about the Kusama version of Blobs.
Blobs enables users to submit arbitrary data to the chain and receive guarantees about the availability of that data, as well as proofs of the order in which data were submitted. Namely:
1. The data can be fetched from the Polkadot/Kusama validator set for up to 24 hours after submission and cannot be withheld.
2. A commitment to the data's availability is stored within the blobchain and used as a proof of guarantee (1) to computer programs, such as smart contracts or Zero-Knowledge circuits.
Data Availability is a key component of Layer-2 scaling approaches, and is already part of Polkadot and Kusama for use in securing Parachains. Blobs will bring this capability out to much broader markets.
Blobs makes a **relay-chain token utility commitment** now and forever. Submitting blobs will always make use of the DOT token on Polkadot and the KSM token on Kusama, as this is the approach with the least user friction.
Blobs supports a variety of rollup SDKs out of the box.
## Hardware Requirements
For full nodes, we currently recommend:
- 512GB Free Disk Space (1TB preferred)
- Stable and fast internet connection. For collators, we recommend 500 Mbps.
The disk space requirements can be reduced by either
1. Running with `--blocks-pruning 1000` in both the parachain and relay chain arguments.
2. Running with `--blocks-pruning 1000 --relay-chain-light-client` in the parachain arguments.
32
+
33
+
Option (2) uses a relay chain light client rather than a full node. This may lead to data arriving at the node slightly later, as it will not follow the head of the relay chain as aggressively as a full node would. This option is not recommended for collators or RPC nodes.
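As a sketch, assuming a binary named `blobs-node` (a placeholder - substitute the actual binary and chain arguments for your deployment), the two options might look like:

```shell
# Hypothetical binary name and chain spec; adjust to your deployment.
# Option 1: prune block bodies on both the parachain node and the
# embedded relay chain node (arguments after `--` go to the relay chain side).
blobs-node --chain kusama --blocks-pruning 1000 -- --chain kusama --blocks-pruning 1000

# Option 2: prune parachain blocks and follow the relay chain with a light client.
blobs-node --chain kusama --blocks-pruning 1000 --relay-chain-light-client
```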
## Building From Source
Building from source requires a few packages to be installed on your system. On Debian-based systems:
```
apt install libssl-dev protobuf-compiler cmake make pkg-config llvm clang
```
On other Linux distributions or macOS, use your package manager's equivalents.
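For instance, on macOS with Homebrew, a roughly equivalent set of packages (formula names assumed) would be:

```shell
# Approximate Homebrew equivalents of the packages above.
# clang and make ship with the Xcode command line tools.
brew install openssl protobuf cmake pkg-config llvm
```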
Blobs can potentially use enormous amounts of disk space under heavy usage. This is because all historical blobs are stored within the blobchain's history. While Polkadot and Kusama expunge ancient blobs after 24 hours, any full node of the blobchain will have all the blobs going back to genesis, as well as all of the other block data.
To avoid this issue, run with `--blocks-pruning <number>`, where `number` is some relatively small value such as `1000` to avoid keeping all historical blobs.
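For example, using `blobs-node` as a placeholder for the actual node binary:

```shell
# Keep only the most recent 1000 block bodies instead of the full history.
# `blobs-node` and the chain argument are placeholders.
blobs-node --chain kusama --blocks-pruning 1000
```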
However, there is still a need for archival nodes. The main reason is that many rollup SDKs do not have any form of p2p networking, and nodes built with those platforms synchronize by downloading ancient blobs from the data availability layer's p2p network. Without someone keeping all ancient blocks, those SDKs would unfortunately stop working. This is beyond the expectations of a data availability layer, as it is intractable to scale to 1Gbps while requiring data availability nodes to keep every historical blob. When all major rollup SDKs have introduced p2p synchronization, the potential storage burden on data availability full nodes will be reduced.