Firewood
Avalanche's purpose-built database stores the Merkle trie directly on disk, eliminating double indexing for predictable, low-latency state access.
Every SSTORE costs gas. Every SLOAD reads state. But what happens to that data after execution?
What if the database were built for the trie, not the other way around?
Double Indexing
Traditional databases flatten the trie into key-value pairs, then build a second index on top. Firewood eliminates the redundant layer.
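To make the redundancy concrete, here is a minimal sketch of the conventional layout, with a BTreeMap standing in for LevelDB and Rust's DefaultHasher standing in for the real node hashing (both stand-ins are illustrative, not actual client code):

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a node's content hash; DefaultHasher replaces Keccak here.
fn node_hash(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn main() {
    // BTreeMap stands in for LevelDB: a sorted structure that maintains
    // its OWN index over keys -- the second, redundant layer of indexing.
    let mut kv: BTreeMap<u64, Vec<u8>> = BTreeMap::new();

    // A child node is serialized and stored under its hash...
    let child = b"serialized child node".to_vec();
    let child_key = node_hash(&child);
    kv.insert(child_key, child);

    // ...and the parent stores that hash, not a location on disk.
    let parent_child_ref = child_key;

    // Resolving the reference means: hash in hand -> search the kv
    // store's index -> fetch. The trie's own structure is thrown away
    // and rebuilt as key-value lookups.
    let resolved = kv.get(&parent_child_ref).expect("child present");
    println!("resolved {} bytes via hash lookup", resolved.len());
}
```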
Trie-as-Index
Firewood stores trie nodes at byte offsets on disk. The trie IS the index — no hash lookups needed.
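A contrasting sketch of offset addressing, assuming a simple length-prefixed layout with an in-memory buffer standing in for the file (Firewood's actual on-disk format differs):

```rust
struct NodeStore {
    file: Vec<u8>, // stands in for the on-disk file
}

impl NodeStore {
    // Append a node, returning the byte offset a parent can store.
    fn write_node(&mut self, bytes: &[u8]) -> u64 {
        let offset = self.file.len() as u64;
        self.file.extend_from_slice(&(bytes.len() as u32).to_le_bytes());
        self.file.extend_from_slice(bytes);
        offset
    }

    // Resolving a child is a single seek + read: the trie edge IS the
    // index entry. No hashing, no search through a separate index.
    fn read_node(&self, offset: u64) -> &[u8] {
        let off = offset as usize;
        let len = u32::from_le_bytes(self.file[off..off + 4].try_into().unwrap()) as usize;
        &self.file[off + 4..off + 4 + len]
    }
}

fn main() {
    let mut store = NodeStore { file: Vec::new() };
    let child_offset = store.write_node(b"serialized child node");
    // The parent persists child_offset in place of a child hash.
    let child = store.read_node(child_offset);
    println!("read {} bytes at offset {}", child.len(), child_offset);
}
```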
No compaction. Ever.
LevelDB compaction creates backpressure that stalls writes and spikes latency. Firewood reclaims space inline via a Future-Delete Log — constant write performance.
N parallel threads.
State changes split by the first nibble of the key into up to 16 subtries. Each subtrie hashes independently on its own thread.
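A sketch of that split, using Rust's DefaultHasher as a stand-in for real Merkle hashing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::thread;

// Stand-in for hashing one subtrie's pending changes.
fn hash_subtrie(changes: &[(Vec<u8>, Vec<u8>)]) -> u64 {
    let mut h = DefaultHasher::new();
    for (k, v) in changes {
        k.hash(&mut h);
        v.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let changes: Vec<(Vec<u8>, Vec<u8>)> = (0u8..=255)
        .map(|i| (vec![i, 0x01], vec![i]))
        .collect();

    // Bucket changed keys by the first nibble: 16 disjoint subtries.
    let mut buckets: Vec<Vec<(Vec<u8>, Vec<u8>)>> = vec![Vec::new(); 16];
    for (k, v) in changes {
        let nibble = (k[0] >> 4) as usize;
        buckets[nibble].push((k, v));
    }

    // Each subtrie shares no nodes with its siblings, so it can be
    // hashed on its own thread with no locking.
    let handles: Vec<_> = buckets
        .into_iter()
        .map(|bucket| thread::spawn(move || hash_subtrie(&bucket)))
        .collect();

    let subtrie_hashes: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    // The 16 results combine into the root (combination elided here).
    println!("{} subtrie hashes computed in parallel", subtrie_hashes.len());
}
```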
Copy-on-write revisions.
A new root copies the modified path only. Untouched subtrees stay shared by pointer across revisions.
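A minimal path-copying sketch over a binary trie (Firewood's trie is wider; two children keep the example short):

```rust
use std::rc::Rc;

enum Node {
    Leaf(u64),
    Branch { left: Rc<Node>, right: Rc<Node> },
}

// Produce a NEW root that differs only along the path to `bits`;
// every untouched child is shared with the old revision via Rc.
fn insert(node: &Rc<Node>, bits: &[bool], value: u64) -> Rc<Node> {
    match (node.as_ref(), bits) {
        (_, []) => Rc::new(Node::Leaf(value)),
        (Node::Branch { left, right }, [false, rest @ ..]) => Rc::new(Node::Branch {
            left: insert(left, rest, value), // copied path
            right: Rc::clone(right),         // shared, not copied
        }),
        (Node::Branch { left, right }, [true, rest @ ..]) => Rc::new(Node::Branch {
            left: Rc::clone(left),
            right: insert(right, rest, value),
        }),
        (Node::Leaf(_), _) => panic!("path longer than trie in this sketch"),
    }
}

fn main() {
    let rev1: Rc<Node> = Rc::new(Node::Branch {
        left: Rc::new(Node::Leaf(1)),
        right: Rc::new(Node::Leaf(2)),
    });
    // New revision: only the root and the left edge are new nodes.
    let rev2 = insert(&rev1, &[false], 99);
    if let (Node::Branch { right: r1, .. }, Node::Branch { right: r2, .. }) =
        (rev1.as_ref(), rev2.as_ref())
    {
        // Both revisions point at the SAME right subtree.
        println!("right subtree shared: {}", Rc::ptr_eq(r1, r2));
    }
}
```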
Deferred persistence.
Commits are fast and in-memory. A background thread writes to disk — the permit count gates how far ahead commits can run before waiting for persistence.
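A sketch of the permit gate, built here from a Mutex and Condvar; the permit count of 4 and the timings are arbitrary illustrations, not Firewood's actual values:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Hypothetical permit gate: commits take a permit, the persistence
// thread returns one per revision flushed. With N permits, commits may
// run at most N revisions ahead of the disk.
struct Permits {
    count: Mutex<usize>,
    cond: Condvar,
}

impl Permits {
    fn acquire(&self) {
        let mut n = self.count.lock().unwrap();
        while *n == 0 {
            n = self.cond.wait(n).unwrap(); // commit stalls here
        }
        *n -= 1;
    }
    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}

fn main() {
    let permits = Arc::new(Permits { count: Mutex::new(4), cond: Condvar::new() });

    // Background persistence thread: "writes" each revision to disk,
    // then returns the permit.
    let p = Arc::clone(&permits);
    let writer = thread::spawn(move || {
        for rev in 0..10 {
            thread::sleep(Duration::from_millis(20)); // simulated disk write
            println!("persisted revision {rev}");
            p.release();
        }
    });

    // Foreground commits are in-memory and fast, but can never run more
    // than 4 revisions ahead of persistence.
    for rev in 0..10 {
        permits.acquire();
        println!("committed revision {rev} (in memory)");
    }
    writer.join().unwrap();
}
```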
Delete-log recycling.
Deleted nodes are added to a delete-log and their space is eventually reused. No compaction needed.
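A sketch of the recycling idea with fixed-size slots; the real Future-Delete Log is more involved (freed space must outlive every revision that still references it):

```rust
struct Store {
    slots: Vec<Option<Vec<u8>>>, // stands in for regions of the file
    delete_log: Vec<usize>,      // slots freed by superseded revisions
}

impl Store {
    // Allocation prefers recycled slots; the file only grows when the
    // delete log is empty. No compaction pass ever rewrites old data.
    fn write_node(&mut self, bytes: Vec<u8>) -> usize {
        match self.delete_log.pop() {
            Some(slot) => {
                self.slots[slot] = Some(bytes);
                slot
            }
            None => {
                self.slots.push(Some(bytes));
                self.slots.len() - 1
            }
        }
    }

    // Deleting only appends to the log; the space is reclaimed inline
    // by a later write instead of by a background compaction.
    fn delete_node(&mut self, slot: usize) {
        self.slots[slot] = None;
        self.delete_log.push(slot);
    }
}

fn main() {
    let mut store = Store { slots: Vec::new(), delete_log: Vec::new() };
    let a = store.write_node(b"old node".to_vec());
    store.delete_node(a);
    let b = store.write_node(b"new node".to_vec());
    println!("slot {a} recycled as slot {b}: {}", a == b);
}
```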
Archival footprint.
On-disk state for an archival C-Chain node. Same history, a fraction of the disk footprint.
Choose your database.
LevelDB is the default AvalancheGo database. Firewood is an experimental backend under active development.
Archival footprint figures: early engineering measurements, C-Chain. Source: Firewood engineering team.
Where Firewood fits.
Firewood is the storage layer for StreVM, Avalanche's Streaming Async Execution engine.
* Firewood is currently experimental. LevelDB remains the default for AvalancheGo.