
Firewood

Purpose-Built Database for Merkleized State

Avalanche's purpose-built database stores the Merkle trie directly on disk, eliminating double indexing for predictable, low-latency state access.

Every SSTORE costs gas. Every SLOAD reads state. But what happens to that data after execution?

What if the database was built for the trie, not the other way around?

01
The Problem

Double Indexing

Traditional databases flatten the trie into key-value pairs, then build a second index on top. Firewood eliminates the redundant layer.

Traditional: EVM SSTORE(key, value) → Merkle Patricia Trie → serialize to KV pairs → LevelDB → re-index in LSM-tree (L0 → L1 → L2) → final write to disk. Indices: 2.

Firewood: EVM SSTORE(key, value) → Firewood → trie node at a byte offset → direct write to disk. Indices: 1.
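For intuition, here is a minimal Rust sketch of the two write paths, with a BTreeMap standing in for the LSM-indexed store and a growable byte buffer standing in for the data file. The types and function names are hypothetical, not Firewood's or LevelDB's actual API.

```rust
// Illustrative sketch only: contrasts the two write paths described above.
use std::collections::BTreeMap;

/// A trie node, simplified to its serialized payload.
struct Node {
    value: Vec<u8>,
}

/// Path A (traditional): serialize the node, key it by hash, and hand it to a
/// generic KV store, which maintains its own sorted index on top of the trie.
fn write_via_kv(store: &mut BTreeMap<[u8; 32], Vec<u8>>, hash: [u8; 32], node: &Node) {
    store.insert(hash, node.value.clone()); // the store re-indexes by key internally
}

/// Path B (Firewood-style): append the node to the data file and remember the
/// byte offset. The offset itself is the reference; no second index exists.
fn write_at_offset(file: &mut Vec<u8>, node: &Node) -> u64 {
    let offset = file.len() as u64;
    file.extend_from_slice(&node.value);
    offset
}

fn main() {
    let node = Node { value: b"balance=1.5 AVAX".to_vec() };

    let mut kv = BTreeMap::new();
    write_via_kv(&mut kv, [0u8; 32], &node);

    let mut file = Vec::new();
    let off = write_at_offset(&mut file, &node);
    println!("node stored at byte offset {off}"); // the parent records `off` directly
}
```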
02
The Solution

Trie-as-Index

Firewood stores trie nodes at byte offsets on disk. The trie IS the index — no hash lookups needed.

[Diagram: trie nodes laid out at byte offsets, e.g. root @0x0000 with children at @0x1A20 and @0x2F40, holding values such as 1.5 AVAX.]

LevelDB, repeated per trie level: hash the node key → check the memtable → query bloom filters → seek the SSTable → read the data block.

Firewood: read the root @0x0000 → follow → @0x1A20 → follow → @0x2F40, one pread() per level.
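A minimal sketch of the offset-following lookup, assuming a hypothetical branch-node layout with 16 child offsets; each `read_at` call stands in for one pread() at a known file offset. None of these names are Firewood's actual API.

```rust
// Illustrative sketch only: walking a key by following child byte offsets,
// one read per trie level.
use std::collections::HashMap;

/// A branch node: 16 child slots (one per nibble), each holding the byte
/// offset of the child node on disk, plus an optional value.
struct Node {
    children: [Option<u64>; 16],
    value: Option<Vec<u8>>,
}

/// Stand-in for the on-disk file: `read_at(offset)` models a single pread().
struct File {
    nodes: HashMap<u64, Node>,
}

impl File {
    fn read_at(&self, offset: u64) -> &Node {
        &self.nodes[&offset] // one pread() per call in the real layout
    }
}

/// Resolve a key nibble by nibble. Each level costs exactly one read at a
/// known offset; there is no hash-keyed lookup in a separate index.
fn get(file: &File, root_offset: u64, nibbles: &[u8]) -> Option<Vec<u8>> {
    let mut offset = root_offset;
    for &n in nibbles {
        offset = file.read_at(offset).children[n as usize]?;
    }
    file.read_at(offset).value.clone()
}

fn main() {
    // Tiny trie: root @0x0000 -> child @0x1A20 (nibble 2) -> leaf @0x2F40 (nibble 7).
    let mut nodes = HashMap::new();
    let mut root = Node { children: [None; 16], value: None };
    root.children[2] = Some(0x1A20);
    let mut mid = Node { children: [None; 16], value: None };
    mid.children[7] = Some(0x2F40);
    let leaf = Node { children: [None; 16], value: Some(b"1.5 AVAX".to_vec()) };
    nodes.insert(0x0000, root);
    nodes.insert(0x1A20, mid);
    nodes.insert(0x2F40, leaf);

    let file = File { nodes };
    assert_eq!(get(&file, 0x0000, &[2, 7]), Some(b"1.5 AVAX".to_vec()));
}
```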
03
Key Features

No compaction. Ever.

LevelDB compaction creates backpressure that stalls writes and spikes latency. Firewood reclaims space inline via a Future-Delete Log — constant write performance.

[Chart: write latency per block over consecutive blocks, LevelDB vs Firewood, with LevelDB's compaction-induced latency spikes marked.]

N parallel threads.

State changes are split by the first nibble of the key into 16 subtries, and each subtrie hashes independently.

[Diagram: 16 subtries, one per first nibble 0-F; all active subtries hash simultaneously.]
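A sketch of the bucketing idea in Rust: dirty keys are split by first nibble into 16 buckets and each bucket is hashed on its own scoped thread. The hash function here is a stand-in (std's DefaultHasher), not the real Merkle hashing, and the shape of the roll-up into a state root is assumed for illustration.

```rust
// Illustrative sketch only: split-by-first-nibble parallel hashing.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::thread;

/// Stand-in for hashing one subtrie bottom-up.
fn subtrie_hash(keys: &[Vec<u8>]) -> u64 {
    let mut h = DefaultHasher::new();
    for k in keys {
        k.hash(&mut h);
    }
    h.finish()
}

fn main() {
    // Dirty keys from a block; the first nibble of each key picks its bucket.
    let dirty: Vec<Vec<u8>> = (0u8..=255).map(|b| vec![b, 0xAB, 0xCD]).collect();

    let mut buckets: [Vec<Vec<u8>>; 16] = std::array::from_fn(|_| Vec::new());
    for key in dirty {
        let nibble = (key[0] >> 4) as usize;
        buckets[nibble].push(key);
    }

    // Hash all 16 subtries in parallel; untouched buckets are simply empty.
    let roots: Vec<u64> = thread::scope(|s| {
        let handles: Vec<_> = buckets
            .iter()
            .map(|bucket| s.spawn(move || subtrie_hash(bucket)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    // The 16 subtrie roots roll up into the overall state root.
    let mut top = DefaultHasher::new();
    roots.hash(&mut top);
    println!("state root (sketch): {:x}", top.finish());
}
```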

Copy-on-write revisions.

A new root copies the modified path only. Untouched subtrees stay shared by pointer across revisions.

[Diagram legend: copied nodes vs nodes shared by pointer; retention: 128 revisions.]
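A sketch of copy-on-write structural sharing using reference-counted nodes: inserting into a new revision clones only the nodes along the modified path, and the assertion at the end checks that an untouched subtree is literally the same pointer in both revisions. The node layout and `insert` helper are hypothetical, not Firewood's types.

```rust
// Illustrative sketch only: copy-on-write revisions with structural sharing.
use std::rc::Rc;

#[derive(Clone)]
struct Node {
    children: [Option<Rc<Node>>; 16],
    value: Option<Vec<u8>>,
}

impl Node {
    fn empty() -> Node {
        Node { children: std::array::from_fn(|_| None), value: None }
    }
}

/// Insert by copying only the nodes along `nibbles`; every untouched child
/// slot keeps pointing at the node from the previous revision.
fn insert(node: &Rc<Node>, nibbles: &[u8], value: Vec<u8>) -> Rc<Node> {
    let mut copy = (**node).clone(); // copies this node's slots, not its subtrees
    match nibbles.split_first() {
        None => copy.value = Some(value),
        Some((&n, rest)) => {
            let child = copy.children[n as usize]
                .clone()
                .unwrap_or_else(|| Rc::new(Node::empty()));
            copy.children[n as usize] = Some(insert(&child, rest, value));
        }
    }
    Rc::new(copy)
}

fn main() {
    // Revision 1: two keys under different first nibbles.
    let r1 = insert(&Rc::new(Node::empty()), &[0x2, 0x7], b"1.5 AVAX".to_vec());
    let r1 = insert(&r1, &[0xA, 0x3], b"42".to_vec());

    // Revision 2: touch only the 0x2... path.
    let r2 = insert(&r1, &[0x2, 0x7], b"2.0 AVAX".to_vec());

    // The 0xA subtree was never copied: both revisions share the same pointer.
    assert!(Rc::ptr_eq(
        r1.children[0xA].as_ref().unwrap(),
        r2.children[0xA].as_ref().unwrap()
    ));
}
```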

Deferred persistence.

Commits are fast and in-memory. A background thread writes to disk — the permit count gates how far ahead commits can run before waiting for persistence.

[Diagram: commits complete in memory while a background thread persists to disk (N = 4 permits in the illustration); when every permit is in use, the next commit stalls until persistence catches up.]
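One way to picture the permit gating, modeled here with a bounded channel rather than Firewood's actual mechanism: each commit consumes a slot, a background thread frees slots as it persists, and commits block once they run the configured number of revisions ahead of disk.

```rust
// Illustrative sketch only: deferred persistence gated by a fixed permit count.
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    const PERMITS: usize = 4; // how far commits may run ahead of persistence

    let (commit_tx, persist_rx) = mpsc::sync_channel::<u64>(PERMITS);

    // Background persister: drains committed revisions and writes them to disk.
    let persister = thread::spawn(move || {
        for revision in persist_rx {
            thread::sleep(Duration::from_millis(50)); // stand-in for disk I/O
            println!("persisted revision {revision}");
        }
    });

    // Foreground commits are in-memory and fast. Each send consumes a permit;
    // once PERMITS commits are un-persisted, send() blocks until the persister
    // catches up (the stalled state in the diagram above).
    for revision in 1..=10u64 {
        commit_tx.send(revision).unwrap();
        println!("committed revision {revision} (in memory)");
    }

    drop(commit_tx); // close the channel so the persister exits
    persister.join().unwrap();
}
```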

Delete-log recycling.

Deleted nodes are added to a delete-log and their space is eventually reused. No compaction needed.

[Diagram: trie nodes N1-N5 expire into the delete log, their space is reclaimed, and new nodes reuse it, O(1) per node. Legend: active, in delete-log, reused.]
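A sketch of delete-log recycling under an assumed retention window: deleted node offsets sit in the log until the revision that dropped them ages out, then move to a free list that new allocations pop from, so reclaiming space is O(1) per node and never rewrites live data. The structures and field names are hypothetical, not Firewood's.

```rust
// Illustrative sketch only: recycling node space via a delete log + free list.
use std::collections::VecDeque;

struct Store {
    next_offset: u64,                 // end of the data file
    delete_log: VecDeque<(u64, u64)>, // (revision deleted at, node offset)
    free_list: Vec<u64>,              // offsets safe to reuse
    retention: u64,                   // how many revisions stay readable
}

impl Store {
    /// Allocate space for a new node: pop a recycled offset if one exists,
    /// otherwise extend the file. O(1) either way.
    fn allocate(&mut self, node_size: u64) -> u64 {
        self.free_list.pop().unwrap_or_else(|| {
            let off = self.next_offset;
            self.next_offset += node_size;
            off
        })
    }

    /// A node dropped at revision `rev` is only logged; its bytes must stay
    /// readable while older revisions can still reference it.
    fn delete(&mut self, rev: u64, offset: u64) {
        self.delete_log.push_back((rev, offset));
    }

    /// Once a revision ages out of the retention window, its logged offsets
    /// become reusable. No rewrite of live data ever happens.
    fn expire(&mut self, current_rev: u64) {
        while let Some(&(rev, off)) = self.delete_log.front() {
            if current_rev - rev <= self.retention {
                break;
            }
            self.delete_log.pop_front();
            self.free_list.push(off);
        }
    }
}

fn main() {
    let mut s = Store { next_offset: 0, delete_log: VecDeque::new(), free_list: vec![], retention: 128 };
    let n1 = s.allocate(64);
    s.delete(1, n1);                // revision 1 drops the node
    s.expire(200);                  // revision 200: rev 1 is outside the 128-rev window
    assert_eq!(s.allocate(64), n1); // the old slot is reused in place
}
```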
04
The Payoff

Archival footprint.

On-disk state for an archival C-Chain node. Same history, a fraction of the disk.

LevelDB: ~16 TB (~$1,920 / node)
Firewood: ~3 TB (~$360 / node)
~5.3× smaller · ~$1,560 saved per node (81%)
Storage: Firewood engineering team · Cost: mid-tier NVMe Gen 4 at ~$120/TB

Choose your database.

LevelDB is the default AvalancheGo database. Firewood is an experimental backend under active development.

Metric               | LevelDB          | Firewood
Type                 | Generic KV (LSM) | Purpose-built trie
Status               | Default          | Experimental
Trie Storage         | Flattened to KV  | Native on disk
Compaction           | Required         | None (FDL)
Write Amplification  | High             | Low
Parallel Merkle      | No               | Yes (16 subtries)
Proof Generation     | Rebuild from KV  | Native
Archival Footprint   | ~16 TB           | ~3 TB

Archival footprint figures: early engineering measurements, C-Chain. Source: Firewood engineering team.

Where Firewood fits.

Firewood is the storage layer for StreVM, Avalanche's Streaming Async Execution engine.

Consensus → Execution Queue → Block Executor → StreVM → Firewood → Disk

* Firewood is currently experimental. LevelDB remains the default for AvalancheGo.
