EIP-4844: remove blob-verification precompile in favor of using the point-evaluation precompile everywhere (ethereum#5140)
protolambda authored Jun 7, 2022
1 parent b69b44a commit f45bd0c
Showing 1 changed file with 3 additions and 27 deletions.
30 changes: 3 additions & 27 deletions EIPS/eip-4844.md
@@ -47,8 +47,6 @@ Compared to full data sharding, this EIP has a reduced cap on the number of these
| `KZG_SETUP_G2` | `Vector[G2Point, FIELD_ELEMENTS_PER_BLOB]`, contents TBD |
| `KZG_SETUP_LAGRANGE` | `Vector[KZGCommitment, FIELD_ELEMENTS_PER_BLOB]`, contents TBD |
| `BLOB_COMMITMENT_VERSION_KZG` | `Bytes1(0x01)` |
-| `BLOB_VERIFICATION_PRECOMPILE_ADDRESS` | `Bytes20(0x13)` |
-| `BLOB_VERIFICATION_PRECOMPILE_GAS` | `1800000` |
| `POINT_EVALUATION_PRECOMPILE_ADDRESS` | `Bytes20(0x14)` |
| `POINT_EVALUATION_PRECOMPILE_GAS` | `50000` |
| `MAX_BLOBS_PER_BLOCK` | `16` |
@@ -221,28 +219,6 @@ and returns `tx.message.blob_versioned_hashes[index]` if `index < len(tx.message
and otherwise zero.
The opcode has a gas cost of `HASH_OPCODE_GAS`.
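
The index-in-range-or-zero behavior described above can be sketched as follows (a hypothetical Python model of the opcode's semantics for illustration only, not consensus code; the function name is an assumption):

```python
# Hypothetical model of the versioned-hash opcode described above:
# return tx.message.blob_versioned_hashes[index] when the index is in
# range, and 32 zero bytes otherwise.
def get_versioned_hash(blob_versioned_hashes: list, index: int) -> bytes:
    if 0 <= index < len(blob_versioned_hashes):
        return blob_versioned_hashes[index]
    return b"\x00" * 32
```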

-### Blob verification precompile
-
-Add a precompile at `BLOB_VERIFICATION_PRECOMPILE_ADDRESS` that checks a blob against a versioned hash.
-The precompile costs `BLOB_VERIFICATION_PRECOMPILE_GAS` and executes the following logic:
-
-```python
-def blob_verification_precompile(input: Bytes) -> Bytes:
-    # First 32 bytes = expected versioned hash
-    expected_v_hash = input[:32]
-    assert expected_v_hash[:1] == BLOB_COMMITMENT_VERSION_KZG
-    # Remaining bytes = list of little-endian data points
-    assert len(input) == 32 + 32 * FIELD_ELEMENTS_PER_BLOB
-    input_points = [
-        int.from_bytes(input[i:i+32], 'little')
-        for i in range(32, len(input), 32)
-    ]
-    assert kzg_to_versioned_hash(blob_to_kzg(input_points)) == expected_v_hash
-    return Bytes([])
-```
-
-Its logic is designed so that it is future-proof, and new versions of commitments can be added if needed.

### Point evaluation precompile

Add a precompile at `POINT_EVALUATION_PRECOMPILE_ADDRESS` that evaluates a proof that a particular blob resolves
@@ -413,8 +389,8 @@ to put the data into blobs. This guarantees availability (which is what rollups
Rollups need data to be available once, long enough to ensure honest actors can construct the rollup state, but not forever.

Optimistic rollups only need to actually provide the underlying data when fraud proofs are being submitted.
-The fraud proof submission function would require the full contents of the fraudulent blob to be submitted as part of calldata.
-It would use the blob verification function to verify the data against the versioned hash that was submitted before,
+The fraud proof can verify the transition in smaller steps, loading at most a few values of the blob at a time through calldata.
+For each value it would provide a KZG proof and use the point evaluation precompile to verify the value against the versioned hash that was submitted before,
and then perform the fraud proof verification on that data as is done today.
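
The per-value check described here amounts to one precompile call per opened blob point. As a rough sketch (not part of this diff; the 192-byte input layout `versioned_hash | z | y | commitment | proof` and the little-endian field encoding are assumptions for illustration, chosen to match the little-endian blob encoding used elsewhere in this draft), the calldata for one such check might be assembled like this:

```python
# Illustrative only: build calldata for a single point-evaluation check.
# Assumed layout: versioned_hash (32) | z (32) | y (32) | commitment (48) | proof (48).
def point_evaluation_input(versioned_hash: bytes, z: int, y: int,
                           commitment: bytes, proof: bytes) -> bytes:
    assert len(versioned_hash) == 32
    assert len(commitment) == 48 and len(proof) == 48
    return (versioned_hash
            + z.to_bytes(32, 'little')   # evaluation point being opened
            + y.to_bytes(32, 'little')   # claimed blob value at that point
            + commitment                 # KZG commitment to the blob
            + proof)                     # KZG opening proof for (z, y)
```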

ZK rollups would provide two commitments to their transaction or state delta data:
@@ -426,7 +402,7 @@ to prove that the kzg (which the protocol ensures points to available data) and

We use versioned hashes (rather than kzgs) as references to blobs in the execution layer to ensure forward compatibility with future changes.
For example, if we need to switch to Merkle trees + STARKs for quantum-safety reasons, then we would add a new version,
-allowing the precompiles to work with the new format.
+allowing the point verification precompile to work with the new format.
Rollups would not have to make any EVM-level changes to how they work;
sequencers would simply have to switch over to using a new transaction type at the appropriate time.
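
The versioning scheme rests on `kzg_to_versioned_hash` (referenced in the removed precompile above), which prefixes a hash of the commitment with a one-byte version tag. A minimal sketch of this construction, assuming the sha256-based definition used by the EIP at the time of this commit:

```python
import hashlib

BLOB_COMMITMENT_VERSION_KZG = b"\x01"

# Versioned hash: a one-byte version tag followed by the last 31 bytes of
# sha256(commitment). A future commitment scheme (e.g. Merkle + STARK)
# would introduce a new version byte without changing the EVM interface.
def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    return BLOB_COMMITMENT_VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]
```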

