
Commit 27f3aab

docs: slight restructure
1 parent 60f87f4

4 files changed: +36 -22 lines

docs-site/docs/intro.md

Lines changed: 8 additions & 6 deletions
@@ -1,20 +1,22 @@
 ---
 sidebar_position: 1
----
+title: Introduction
 
-# Getting Started
+---
 
 ## Blobs
 
-The Blobs project exposes the **data availability** capabilities of [Polkadot](https://polkadot.network) and [Kusama](https://kusama.network) for general use. Use-cases include rollups and inscriptions.
+The Blobs project exposes **sequencing** and **data availability** capabilities of [Polkadot](https://polkadot.network) and [Kusama](https://kusama.network) for general use. Use-cases include rollups and inscriptions.
 
 The Blobs codebase is located at https://github.com/thrumdev/blobs. There is a live parachain on Kusama with Parachain ID 3338 running the Blobs runtime.
 
-Blobs enables users to submit arbitrary data to the chain and receive guarantees about the availability of that data. Namely:
+In this documentation site, we'll often use the term Polkadot to refer to the Polkadot Relay Chain - the hub chain which provides security for everything running on Polkadot. Kusama runs on the same technology as Polkadot, so the Kusama version of Blobs works identically to the Polkadot version, just with a different network. You can mentally substitute "Kusama" for "Polkadot" when thinking about the Kusama version of Blobs.
+
+Blobs enables users to submit arbitrary data to the chain and receive guarantees about the availability of that data, as well as proofs of the order in which data were submitted. Namely:
 1. The data can be fetched from the Polkadot/Kusama validator set for up to 24 hours after submission and cannot be withheld.
 2. A commitment to the data's availability is stored within the blobchain and used as a proof of guarantee (1) to computer programs, such as smart contracts or Zero-Knowledge circuits.
 
-Data Availability is a key component of Layer-2 scaling approaches, and is already part of Polkadot and Kusama for use in securing Parachains. Blobs will bring this capability out to much broader markets.
+Data Availability is a key component of Layer-2 scaling approaches, and is already part of Polkadot and Kusama for use in securing Parachains. Blobs will bring this capability out to much broader markets.
 
 Blobs makes a **relay-chain token utility commitment** now and forever. Submitting blobs will always make use of the DOT token on Polkadot and the KSM token on Kusama, as this is the approach with the least user friction.
 
@@ -24,4 +26,4 @@ Blobs supports a variety of rollup SDKs out of the box.
 - [x] Rollkit
 - [x] Sovereign SDK
 - [ ] OP Stack
-- [ ] Polygon ZK-EVM
+- [ ] Polygon ZK-EVM
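For a concrete sense of what guarantee (1) looks like in practice, here is a hypothetical sketch of submitting a blob with the generic `@polkadot/api-cli` tool. The pallet and call name (`tx.blobs.submitBlob`), the argument shape (a namespace followed by the data), and the endpoint are illustrative assumptions, not details confirmed by this commit; consult the chain's metadata for the actual extrinsic.

```bash
# Hypothetical sketch only: the extrinsic name and arguments below are
# assumed for illustration. Inspect the chain's metadata (e.g. in the
# polkadot-js apps UI) for the real blobs pallet interface.
npm install -g @polkadot/api-cli

# Submit four bytes of data under namespace 1, signing with a dev seed.
polkadot-js-api --ws ws://127.0.0.1:9944 \
  tx.blobs.submitBlob 1 0xdeadbeef \
  --seed "//Alice"
```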

docs-site/docs/node-operators/index.md renamed to docs-site/docs/node-operators/getting-started.md

Lines changed: 16 additions & 15 deletions
@@ -1,6 +1,5 @@
 ---
-sidebar_position: 2
-title: Node Operators
+title: Getting Started
 ---
 
 ## Releases
@@ -19,15 +18,6 @@ You can pass arguments to each one of these underlying nodes with the following
 ./sugondat-node --arg-for-blobs --arg2-for-blobs -- --arg-for-relay --arg2-for-relay
 ```
 
-## Blobs and Storage Usage
-
-Blobs can potentially use enormous amounts of disk space under heavy usage. This is because all historical blobs are stored within the blobchain's history. While Polkadot and Kusama expunge ancient blobs after 24 hours, any full node of the blobchain will have all the blobs going back to the genesis, as well as all of the other block data.
-
-To avoid this issue, run with `--blocks-pruning <number>`, where `number` is some relatively small value such as `1000` to avoid keeping all historical blobs.
-
-However, there is still a need for archival nodes. The main reason is that many rollup SDKs do not have any form of p2p and their nodes synchronize by downloading ancient blobs from the data availability layer's p2p network. Without someone keeping all ancient blocks, those SDKs
-would unfortunately stop working.
-
 ## Hardware Requirements
 
 For full nodes, we currently recommend:
@@ -36,8 +26,11 @@ For full nodes, we currently recommend:
 - 512G Free Disk Space (1TB preferred)
 - Stable and fast internet connection. For collators, we recommend 500mbps.
 
-The disk space requirements can be reduced by
-1. Running with `--blocks-pruning` on both
+The disk space requirements can be reduced by either:
+1. Running with `--blocks-pruning 1000` in both the parachain and relay chain arguments, or
+2. Running with `--blocks-pruning 1000 --relay-chain-light-client` in the parachain arguments.
+
+Option (2) uses a relay chain light client rather than a full node. This may lead to data arriving at the node slightly later, as it will not follow the head of the relay chain as aggressively as a full node would. This option is not recommended for collators or RPC nodes.
 
 ## Building From Source
 
@@ -51,8 +44,7 @@ Building from source requires a few packages to be installed on your system. On
 apt install libssl-dev protobuf-compiler cmake make pkg-config llvm clang
 ```
 
-On other linux distros or Mac, use your package manager's equivalents.
-
+On other Linux distributions or macOS, use your package manager's equivalents.
 
 Building the node:
 ```bash
@@ -65,3 +57,12 @@ Running the node:
 ```bash
 target/release/sugondat-node --chain sugondat-kusama
 ```
+
+## Blobs and Storage Usage
+
+Blobs can potentially use enormous amounts of disk space under heavy usage. This is because all historical blobs are stored within the blobchain's history. While Polkadot and Kusama expunge ancient blobs after 24 hours, any full node of the blobchain will have all the blobs going back to the genesis, as well as all of the other block data.
+
+To avoid this issue, run with `--blocks-pruning <number>`, where `number` is some relatively small value such as `1000` to avoid keeping all historical blobs.
+
+However, there is still a need for archival nodes. The main reason is that many rollup SDKs do not have any form of p2p networking, and nodes built with those platforms synchronize by downloading ancient blobs from the data availability layer's p2p network. Without someone keeping all ancient blocks, those SDKs
+would unfortunately stop working. This is beyond the expectations of a data availability layer, as it is intractable to scale to 1Gbps while requiring data availability nodes to keep every historical blob. When all major rollup SDKs have introduced p2p synchronization, the potential storage burden on data availability full nodes will be reduced.
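Combining the argument-separator syntax and the pruning flags documented above, the two disk-reduction options would look roughly as follows. These exact invocations are an assumption pieced together from the flags on this page, not commands taken from it:

```bash
# Option (1): prune old blocks on both nodes. Arguments before `--` go to
# the parachain (blobs) node; arguments after `--` go to the relay chain node.
./sugondat-node --chain sugondat-kusama --blocks-pruning 1000 -- --blocks-pruning 1000

# Option (2): prune old blocks and follow the relay chain with an embedded
# light client instead of a full node (not recommended for collators or RPC nodes).
./sugondat-node --chain sugondat-kusama --blocks-pruning 1000 --relay-chain-light-client
```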

New file · Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+---
+title: Data Availability
+---

docs-site/sidebars.js

Lines changed: 9 additions & 1 deletion
@@ -14,7 +14,15 @@
 /** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
 const sidebars = {
   // By default, Docusaurus generates a sidebar from the docs folder structure
-  docsSidebar: [{type: 'autogenerated', dirName: '.'}],
+  docsSidebar: [
+    'intro',
+    {type: 'category', label: "Protocol", items: [
+      {type: 'autogenerated', dirName: 'protocol'}
+    ]},
+    {type: 'category', label: "Node Operators", items: [
+      {type: 'autogenerated', dirName: 'node-operators'}
+    ]}
+  ],
 };
 
 export default sidebars;
