Add Dprint
MiguelDD1 committed Aug 4, 2023
1 parent 7110f61 commit 391b1fa
Showing 6 changed files with 111 additions and 98 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/default.yml
@@ -37,7 +37,10 @@ jobs:

- name: Install Protoc
uses: arduino/setup-protoc@v1


- name: Check `TOML`, `JSON`, and `MarkDown` formatting
uses: dprint/check@v2.2

- name: Check code formatting
uses: actions-rs/cargo@v1
with:
2 changes: 1 addition & 1 deletion .rustfmt.toml
@@ -1,4 +1,4 @@
match_block_trailing_comma = true
use_field_init_shorthand = true
edition = "2021"
hard_tabs = true
hard_tabs = true
76 changes: 30 additions & 46 deletions Cargo.toml
@@ -1,77 +1,61 @@
[package]
name = "avail-light"
version = "1.5.0"
edition = "2021"
authors = ["Avail Team"]
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
kate-recovery = { version = "0.8", git = "https://github.com/availproject/avail-core", tag = "da-primitives/v0.4.6" }
avail-subxt = { git = "https://github.com/availproject/avail.git", tag = "v1.6.1-rc3" }
chrono = "0.4.19"
dusk-plonk = { git = "https://github.com/availproject/plonk.git", tag = "v0.12.0-polygon-2" }
url = "2.2.2"
tokio = { version = "1.21.2", features = ["full"] }
tokio-stream = "0.1.11"
futures = { version = "0.3.15", default-features = false, features = ["std", "thread-pool"] }
futures-util = "0.3.17"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.68"
hyper = { version = "0.14.23", features = ["full", "http1"] }
hyper-tls = "0.5.0"
kate-recovery = { version = "0.8", git = "https://github.com/availproject/avail-core", tag = "da-primitives/v0.4.6" }
num = "0.4.0"
rand = "0.8.4"
regex = "1.5"
num = "0.4.0"
futures = { version = "0.3.15", default-features = false, features = [
"std",
"thread-pool",
] }
chrono = "0.4.19"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.68"
tokio = { version = "1.21.2", features = ["full"] }
tokio-stream = "0.1.11"
url = "2.2.2"

libipld = { version = "0.12.0", default-features = false, features = [
"dag-cbor",
] }
multihash = { version = "0.14.0", default-features = false, features = [
"blake3",
"sha3",
] }
libp2p = { version = "0.51.0", features = ["full"] }
thiserror = "1.0.37"
anyhow = "1.0.41"
tempdir = "0.3.7"
ed25519-dalek = "1.0.1"
async-std = { version = "1.12.0", features = ["attributes"] }
clap = { version = "4.3.0", features = ["cargo"] }
codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = ["derive", "full", "bit-vec"] }
confy = "0.4.0"
ed25519-dalek = "1.0.1"
hex = "0.4"
libipld = { version = "0.12.0", default-features = false, features = ["dag-cbor"] }
libp2p = { version = "0.51.0", features = ["full"] }
multihash = { version = "0.14.0", default-features = false, features = ["blake3", "sha3"] }
num_cpus = "1.13.0"
prometheus-client = "0.19.0"
rocksdb = { version = "0.17.0", features = ["snappy", "multi-threaded-cf"] }
threadpool = "1.8.1"
sp-core = "6.0.0"
codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = [
"derive",
"full",
"bit-vec",
] }
scale-info = { version = "2.0.0", features = ["bit-vec"] }
hex = "0.4"
warp = "0.3.2"
smallvec = "1.6.1"
sp-core = "6.0.0"
tempdir = "0.3.7"
thiserror = "1.0.37"
threadpool = "1.8.1"
tracing = "0.1.35"
tracing-subscriber = { version = "0.3.15", features = ["json"] }
prometheus-client = "0.19.0"
clap = { version = "4.3.0", features = ["cargo"] }
smallvec = "1.6.1"
warp = "0.3.2"

openssl = { version = "0.10", features = ["vendored"] }
void = "1.0.2"
itertools = "0.10.5"
base64 = "0.21.0"
mockall = "0.11.3"
async-trait = "0.1.66"
base64 = "0.21.0"
hex-literal = "0.4.0"
itertools = "0.10.5"
mockall = "0.11.3"
openssl = { version = "0.10", features = ["vendored"] }
pcap = "1.1.0"
uuid = { version = "1.3.4", features = [
"v4",
"fast-rng",
"macro-diagnostics",
] }
uuid = { version = "1.3.4", features = ["v4", "fast-rng", "macro-diagnostics"] }
void = "1.0.2"

[features]
network-analysis = []
77 changes: 41 additions & 36 deletions README.md
@@ -13,24 +13,24 @@

`avail-light` is a data availability light client with the following functionalities:

* Listening on the Avail network for finalized blocks
* Random sampling and proof verification of a predetermined number of cells (`{row, col}` pairs) on each new block. After successful block verification, confidence is calculated for a number of *cells* (`N`) in a matrix, with `N` depending on the percentage of certainty the light client wants to achieve.
* Data reconstruction through application client (WIP).
* HTTP endpoints exposing relevant data, both from the light and application clients
- Listening on the Avail network for finalized blocks
- Random sampling and proof verification of a predetermined number of cells (`{row, col}` pairs) on each new block. After successful block verification, confidence is calculated for a number of _cells_ (`N`) in a matrix, with `N` depending on the percentage of certainty the light client wants to achieve.
- Data reconstruction through application client (WIP).
- HTTP endpoints exposing relevant data, both from the light and application clients
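The relationship between the number of verified cells `N` and the confidence percentage can be sketched as follows. This is a minimal illustration, assuming the standard data-availability sampling argument (an adversary must withhold at least half of the extended matrix, so each verified random cell halves the failure probability); the formula is inferred to match the example responses later in this README, where 93.75% corresponds to 4 verified cells.

```rust
/// Confidence (in percent) that the block data is available after `n`
/// randomly sampled cells have been successfully verified, assuming at
/// least half of the matrix must be withheld to prevent reconstruction.
fn confidence(n: u32) -> f64 {
	100.0 * (1.0 - 0.5f64.powi(n as i32))
}

fn main() {
	// 4 verified cells -> 93.75, matching the example API response below.
	println!("{}", confidence(4));
}
```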

### Modes of Operation

1. **Light-client Mode**: The basic mode of operation, which is always active regardless of the selected mode. If an `App_ID` is not provided (or is =0), this mode will commence. On each header received, the client does random sampling using two mechanisms:

1. DHT - client first tries to retrieve cells via Kademlia.
2. RPC - if DHT retrieval fails, the client uses RPC calls to Avail nodes to retrieve the needed cells. The cells not already found in the DHT will be uploaded.
1. DHT - client first tries to retrieve cells via Kademlia.
2. RPC - if DHT retrieval fails, the client uses RPC calls to Avail nodes to retrieve the needed cells. The cells not already found in the DHT will be uploaded.

Once the data is received, the light client verifies individual cells and calculates the confidence, which is then stored locally.
Once the data is received, the light client verifies individual cells and calculates the confidence, which is then stored locally.

2. **App-Specific Mode**: If an **`App_ID` > 0** is given in the config file, the application client (part of the light client) downloads all the relevant app data, reconstructs it and persists it locally. Reconstructed data is then available to be accessed via an HTTP endpoint. (WIP)

3. **Fat-Client Mode**: The client retrieves larger contiguous chunks of the matrix on each block via RPC calls to an Avail node, and stores them on the DHT. This mode is activated when the `block_matrix_partition` parameter is set in the config file, and is mainly used with the `disable_proof_verification` flag because of the resource cost of cell validation.
**IMPORTANT**: disabling proof verification introduces a trust assumption towards the node, that the data provided is correct.
**IMPORTANT**: disabling proof verification introduces a trust assumption towards the node, that the data provided is correct.
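The DHT-then-RPC fallback described in light-client mode can be sketched as a small pipeline. The names below (`fetch_cells`, `Position`, the closures standing in for the DHT and RPC fetchers) are hypothetical stand-ins for illustration, not the crate's actual API:

```rust
use std::collections::HashSet;

/// Hypothetical cell position; the real client samples `{row, col}` pairs.
type Position = (u32, u32);

/// Try the DHT first, fall back to RPC for the misses, and return the
/// fetched cells plus the positions that must be uploaded back to the
/// DHT (i.e. the ones that had to come over RPC).
fn fetch_cells(
	wanted: &[Position],
	dht: impl Fn(&Position) -> Option<Vec<u8>>,
	rpc: impl Fn(&Position) -> Vec<u8>,
) -> (Vec<(Position, Vec<u8>)>, HashSet<Position>) {
	let mut cells = Vec::new();
	let mut to_upload = HashSet::new();
	for pos in wanted {
		match dht(pos) {
			Some(data) => cells.push((*pos, data)),
			None => {
				// Cells missing from the DHT are fetched over RPC and
				// later uploaded so other peers can find them.
				cells.push((*pos, rpc(pos)));
				to_upload.insert(*pos);
			}
		}
	}
	(cells, to_upload)
}

fn main() {
	// Simulate a DHT that only knows cell (0, 0); (0, 1) falls back to RPC.
	let dht = |p: &Position| if *p == (0, 0) { Some(vec![0xAA]) } else { None };
	let rpc = |_: &Position| vec![0xBB];
	let (cells, to_upload) = fetch_cells(&[(0, 0), (0, 1)], dht, rpc);
	println!("fetched {} cells, {} to upload", cells.len(), to_upload.len());
}
```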

## Installation

@@ -66,7 +66,7 @@ bootstraps = [["12D3KooWMm1c4pzeLPGkkCJMAgFbsfQ8xmVDusg272icWsaNHWzN", "/ip4/127
Now, run the client:

```bash
cargo run -- -c config.yaml
cargo run -- -c config.yaml
```

## Config reference
@@ -171,12 +171,12 @@ max_kad_provided_keys = 1024

## Notes

* When running the first light client in a network, it becomes a bootstrap client. Once its execution is started, it is paused until a second light client has been started and connected to it, so that the DHT bootstrap mechanism can complete successfully.
* Immediately after starting a fresh light client, block sync is executed to a block depth set in the `sync_blocks_depth` config parameter. The sync client is using both the DHT and RPC for that purpose.
* In order to spin up a fat client, the config needs to contain the `block_matrix_partition` parameter set to a fraction of the matrix. It is recommended to set `disable_proof_verification` to true, because of the resource costs of proof verification.
* `sync_blocks_depth` needs to correspond to the maximum number of blocks the connected node is caching (if downloading data via RPC).
* Prometheus is used for exposing detailed metrics about the light client
* In order to use the network analyzer, the light client has to be compiled with the `--features 'network-analysis'` flag; when running the LC with the network analyzer, sufficient capabilities have to be given to the client in order for it to have the permissions needed to listen on the socket: `sudo setcap cap_net_raw,cap_net_admin=eip /path/to/light/client/binary`
- When running the first light client in a network, it becomes a bootstrap client. Once its execution is started, it is paused until a second light client has been started and connected to it, so that the DHT bootstrap mechanism can complete successfully.
- Immediately after starting a fresh light client, block sync is executed to a block depth set in the `sync_blocks_depth` config parameter. The sync client is using both the DHT and RPC for that purpose.
- In order to spin up a fat client, the config needs to contain the `block_matrix_partition` parameter set to a fraction of the matrix. It is recommended to set `disable_proof_verification` to true, because of the resource costs of proof verification.
- `sync_blocks_depth` needs to correspond to the maximum number of blocks the connected node is caching (if downloading data via RPC).
- Prometheus is used for exposing detailed metrics about the light client
- In order to use the network analyzer, the light client has to be compiled with the `--features 'network-analysis'` flag; when running the LC with the network analyzer, sufficient capabilities have to be given to the client in order for it to have the permissions needed to listen on the socket: `sudo setcap cap_net_raw,cap_net_admin=eip /path/to/light/client/binary`

## Usage and examples

@@ -192,7 +192,7 @@ Response:

```json
{
"latest_block": 10
"latest_block": 10
}
```

@@ -208,9 +208,9 @@ Response:

```json
{
"block": 1,
"confidence": 93.75,
"serialised_confidence": "5232467296"
"block": 1,
"confidence": 93.75,
"serialised_confidence": "5232467296"
}
```
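The `serialised_confidence` field packs the block number and the confidence into a single integer (returned as a string). Judging by the example values alone (block 1 at 93.75% gives `"5232467296"`), the encoding appears to be the block number shifted left 32 bits plus the confidence scaled by 10^7. Treat this reconstruction as an assumption inferred from the sample response, not a documented format:

```rust
/// Assumed encoding: (block << 32) | round(confidence_percent * 1e7).
fn serialised_confidence(block: u64, confidence: f64) -> u64 {
	(block << 32) | (confidence * 10_000_000.0).round() as u64
}

fn main() {
	// Reproduces the example response: block 1 at 93.75% -> 5232467296.
	println!("{}", serialised_confidence(1, 93.75));
}
```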

@@ -231,10 +231,10 @@ Response:

```json
{
"block": 46,
"extrinsics": [
"ZXhhbXBsZQ=="
]
"block": 46,
"extrinsics": [
"ZXhhbXBsZQ=="
]
}
```
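The extrinsics in this response are base64-encoded payloads (`"ZXhhbXBsZQ=="` decodes to `example`). A minimal hand-rolled decoder is enough to inspect them; this is purely illustrative — the crate itself depends on the `base64` crate (see the Cargo.toml diff above) rather than anything like this:

```rust
/// Minimal base64 decoder (standard alphabet, '=' padding), for
/// inspecting the example extrinsic payloads shown in this README.
fn b64_decode(s: &str) -> Vec<u8> {
	const ALPHABET: &[u8] =
		b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
	let val = |c: u8| ALPHABET.iter().position(|&a| a == c).unwrap() as u32;
	// Drop padding, then fold each group of four 6-bit symbols into bytes.
	let symbols: Vec<u8> = s.bytes().filter(|&c| c != b'=').collect();
	let mut out = Vec::new();
	for chunk in symbols.chunks(4) {
		let mut acc = 0u32;
		for (i, &c) in chunk.iter().enumerate() {
			acc |= val(c) << (18 - 6 * i);
		}
		// n symbols carry n - 1 whole bytes.
		for i in 0..chunk.len() - 1 {
			out.push((acc >> (16 - 8 * i)) as u8);
		}
	}
	out
}

fn main() {
	let decoded = b64_decode("ZXhhbXBsZQ==");
	println!("{}", String::from_utf8(decoded).unwrap());
}
```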

@@ -260,7 +260,7 @@ Response:

```json
{
"AppClient": 1
"AppClient": 1
}
```

@@ -274,9 +274,9 @@ Response:

```json
{
"block_num": 10,
"confidence": 93.75,
"app_id": 1
"block_num": 10,
"confidence": 93.75,
"app_id": 1
}
```

@@ -290,7 +290,7 @@ Response:

```json
{
"latest_block": 255
"latest_block": 255
}
```

Expand Down Expand Up @@ -338,7 +338,7 @@ Given a block number, it returns the confidence computed by the light client for

> Path parameters:
* `block_number` - block number (required)
- `block_number` - block number (required)

#### Responses

@@ -347,7 +347,7 @@ In case when confidence is computed:
> Status code: `200 OK`
```json
{"block":1,"confidence":93.75,"serialised_confidence":"5232467296"}
{ "block": 1, "confidence": 93.75, "serialised_confidence": "5232467296" }
```

If confidence is not computed, and specified block is before the latest processed block:
@@ -372,11 +372,11 @@ Given a block number, it retrieves the hex-encoded extrinsics for the specified

> Path parameters:
* `block_number` - block number (required)
- `block_number` - block number (required)

> Query parameters:
* `decode` - `true` if decoded extrinsics are requested (boolean, optional, default is `false`)
- `decode` - `true` if decoded extrinsics are requested (boolean, optional, default is `false`)

#### Responses

@@ -385,15 +385,20 @@ If application data is available, and decode is `false` or unspecified:
> Status code: `200 OK`
```json
{"block":1,"extrinsics":["0xc5018400d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d01308e88ca257b65514b7b44fc1913a6a9af6abc34c3d22761b0e425674d68df7de26be1c8533a7bbd01fdb3a8daa5af77df6d3fb0a67cde8241f461f4fe16f188000000041d011c6578616d706c65"]}
{
"block": 1,
"extrinsics": [
"0xc5018400d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d01308e88ca257b65514b7b44fc1913a6a9af6abc34c3d22761b0e425674d68df7de26be1c8533a7bbd01fdb3a8daa5af77df6d3fb0a67cde8241f461f4fe16f188000000041d011c6578616d706c65"
]
}
```

If application data is available, and decode is `true`:

> Status code: `200 OK`
```json
{"block":1,"extrinsics":["ZXhhbXBsZQ=="]}
{ "block": 1, "extrinsics": ["ZXhhbXBsZQ=="] }
```

If application data is not available, and specified block is the latest block:
@@ -418,7 +423,7 @@ Retrieves the status of the latest block processed by the light client.

> Path parameters:
* `block_number` - block number (required)
- `block_number` - block number (required)

#### Responses

@@ -427,7 +432,7 @@ If latest processed block exists, and `app_id` is configured (otherwise, `app_id
> Status code: `200 OK`
```json
{"block_num":89,"confidence":93.75,"app_id":1}
{ "block_num": 89, "confidence": 93.75, "app_id": 1 }
```

If there are no processed blocks:
20 changes: 20 additions & 0 deletions dprint.json
@@ -0,0 +1,20 @@
{
"incremental": true,
"useTabs": true,
"json": {
},
"markdown": {
},
"toml": {
},
"includes": ["**/*.{json,md,toml}"],
"excludes": [
"**/*-lock.json",
"**/target"
],
"plugins": [
"https://plugins.dprint.dev/json-0.15.6.wasm",
"https://plugins.dprint.dev/markdown-0.14.1.wasm",
"https://plugins.dprint.dev/toml-0.5.4.wasm"
]
}
