A Rust implementation of Verifiable Capacity-Bound Functions (Construction 1) from Ateniese, Chen, Francati, Papadopoulos, and Tang (PKC 2023), plus a storage proof service built on top of it.
A VCBF is a cryptographic primitive that forces anyone evaluating the function to read a minimum number of distinct bits from memory — no matter how much computation they throw at it. Think of it as the space analog of a Verifiable Delay Function (VDF): where VDFs guarantee minimum time, VCBFs guarantee minimum memory.
The key properties:
- Minimum capacity: The evaluator must read at least m distinct bits. Even an adversary with unlimited computation cannot cheat this.
- Efficient verification: Anyone can check the result using only a compact verification key (288 bytes). Verification takes ~1.7ms and is O(1) — constant regardless of how much data the evaluator had to read.
- Soundness: No computationally bounded adversary can forge a proof for a wrong answer.
Memory access has a physical cost that can't be optimized away. Unlike CPU operations where ASICs can be 200,000x faster than general-purpose hardware, off-chip memory accesses cost roughly the same energy whether you're using a CPU or an ASIC. This makes VCBFs useful in several contexts:
A storage provider claims to be holding your data. How do you verify without downloading it all back? Send a random challenge, get a proof back. If the provider deleted the data, they can't answer. This is what Filecoin-style systems need, and VCBFs provide it without random oracles or trusted hardware.
Traditional proof-of-work (hashing) is dominated by ASICs. VCBFs enforce memory access, which has comparable energy cost on ASICs and CPUs — leveling the playing field.
Each identity in a network must prove it controls dedicated memory. Since VCBF prevents time/space trade-offs, an attacker can't share memory across fake identities.
Unlike TEE-based attestation (SGX, TrustZone), VCBFs provide mathematical guarantees. No chip manufacturer in the trust chain. No side-channel attacks to worry about.
The construction is built on polynomial evaluation over finite fields with KZG commitments:
- Setup: Sample a random degree-d polynomial over F_p. The coefficients are the data that must be stored. Generate a Structured Reference String (SRS) via trusted setup.
- Eval: Given a random challenge point x, evaluate f(x) and produce a KZG proof. This requires reading all d+1 coefficients — the capacity bound.
- Verify: Check the proof with two bilinear pairings. This reads only the compact verification key (4 group elements). O(1), independent of d.
The security argument uses Kolmogorov complexity — not Shannon entropy — which is why the bound holds even against adaptive adversaries who choose their strategy after seeing the polynomial.
```
src/
  lib.rs          Core library exports
  errors.rs       Error types
  poly.rs         Polynomial utilities (random sampling, synthetic division)
  vc.rs           KZG-based Verifiable Computation scheme
  vcbf.rs         VCBF Construction 1 (Setup / Eval / Verify)
  service/
    types.rs      HTTP API types and serialization helpers
    server.rs     Storage provider (axum HTTP server)
    client.rs     Verifier client (reqwest)
  bin/
    setup.rs      Trusted setup CLI
    provider.rs   Storage provider server binary
    verifier.rs   Verifier CLI binary
examples/
  demo.rs         Standalone demo with timing
tests/
  integration.rs  End-to-end protocol tests
benches/
  benchmark.rs    Criterion benchmarks
```
- Rust 1.70+ (`rustup update stable`)

```
cargo run --release --example demo -- 1000
```

This runs Setup, Eval, and Verify for a degree-1000 polynomial and prints timing for each step.
1. Trusted setup — generates keys:

```
cargo run --release --bin vcbf-setup -- --degree 1000
```

This produces:

- `eval_key.bin` (~79 KB) — the evaluation key, given to the storage provider
- `verif_key.bin` (288 bytes) — the verification key, kept by the verifier
- `eval_key.json` — parameter metadata
2. Start the storage provider — serves challenges:

```
cargo run --release --bin vcbf-provider -- --eval-key eval_key.bin --bind 127.0.0.1:3000
```

3. Verify storage — challenge the provider:
```
# Register (fetch and save the verification key)
cargo run --release --bin vcbf-verifier -- register

# Send a single challenge
cargo run --release --bin vcbf-verifier -- challenge

# Run a full audit (20 challenges)
cargo run --release --bin vcbf-verifier -- audit --count 20
```

Run the test suite with `cargo test`.

Measured on Apple Silicon (release build):
| Operation | d=100 | d=1,000 | d=10,000 |
|---|---|---|---|
| Setup | ~4ms | ~85ms | ~805ms |
| Eval | ~2ms | ~12ms | ~88ms |
| Verify | ~1.7ms | ~1.7ms | ~1.7ms |
Verification is constant time — two bilinear pairings, regardless of polynomial degree. This is the core asymmetry: the evaluator's work scales with d, but the verifier's does not.
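For reference, the two pairings implement the standard KZG verification equation (the exact grouping of terms in this codebase may differ):

$$
e\big(C - [y]_1,\ [1]_2\big) \;=\; e\big(\pi,\ [\tau - x]_2\big)
$$

where $C$ is the commitment to $f$, $y = f(x)$ is the claimed value, $\pi$ is a commitment to the quotient $(f(X) - y)/(X - x)$, and $[\tau]_2$ comes from the SRS. Both sides involve a fixed number of group elements, so the check never touches the coefficients themselves.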
| Endpoint | Method | Description |
|---|---|---|
| `/info` | GET | Returns degree, capacity, lambda, and hex-encoded verification key |
| `/challenge` | POST | Accepts `{"challenge": "<hex>"}`, returns `{"value": "<hex>", "proof": "<hex>"}` |
All cryptographic objects are serialized with arkworks' `CanonicalSerialize` and transmitted as hex strings.
- Curve: BLS12-381 (~128-bit security)
- Crypto: arkworks 0.5 (ark-ff, ark-ec, ark-poly, ark-bls12-381)
- Server: axum + tokio
- Client: reqwest + clap
VCBF guarantees that someone, somewhere must hold the full polynomial data to produce a valid proof. This is a physical fact enforced by the capacity bound — no amount of computation can bypass it.
Critically, VCBF does not prove which machine holds the data or where it is stored. A provider could forward challenges to a remote server, outsource to cloud storage, or delegate to another node. As long as a valid proof comes back, the data exists.
This is fine if your goal is data persistence. If you're paying someone to store your data and you just want to verify it hasn't been deleted, VCBF is exactly the right tool. You don't care about their infrastructure — local SSD, cloud bucket, distributed across continents — you care that the data is still retrievable. A valid proof means it is.
This matters if your goal is data locality. If you need to prove that a specific machine holds the data (e.g., anti-Sybil, dedicated resource commitment), VCBF alone is not sufficient. You would need additional mechanisms like response-time enforcement (local memory access is ~100ns vs. network RTT of 20-200ms) or per-provider key binding to close the gap.
- Trusted setup: The SRS generation uses a secret that must be destroyed. Production use requires a multi-party computation ceremony (like Zcash's Powers of Tau).
- Adversary class: The concrete security proof covers adversaries making a constant number of random memory accesses. The Kedlaya-Umans data structure means adversaries with non-constant accesses can evaluate from smaller memory, degrading the capacity bound.
- No amortization: Evaluating the VCBF on multiple challenges in parallel doesn't scale the capacity linearly.
- No rate limiting: The `/challenge` endpoint triggers expensive computation. Production deployments should add rate limiting.
- Ateniese, Chen, Francati, Papadopoulos, Tang. Verifiable Capacity-bound Functions: A New Primitive from Kolmogorov Complexity. PKC 2023.
- Elkhiyaoui, Onen, Azraoui, Molva. Efficient techniques for publicly verifiable delegation of computation. ACM ASIACCS 2016.
- Kate, Zaverucha, Goldberg. Constant-Size Commitments to Polynomials and Their Applications. ASIACRYPT 2010.
This is a research implementation. Use at your own risk.