Deploy Primary Network with Terraform and Ansible
Operate production Avalanche Primary Network validators on AWS with staking key management, database snapshots, and zero-downtime migration.
This guide covers deploying and operating Avalanche Primary Network validators — the nodes that validate the P-Chain, X-Chain, and C-Chain. These validators participate in Avalanche consensus and earn staking rewards.
AWS only. Staking minimum: 2,000 AVAX (mainnet), 1 AVAX (Fuji). Bootstrap time: 2–4 hours via state-sync. Cost: ~$326/month per validator.
Primary Network workflows are currently supported on AWS only. The instances require high-performance NVMe storage for the full chain database.
Architecture
Key Differences from L1 Deployment
| Aspect | L1 Deployment | Primary Network |
|---|---|---|
| Chain sync | Partial P-Chain headers only | Full P/X/C chain data |
| Instance type | c6a.xlarge (general compute) | i4i.xlarge (NVMe-optimized) |
| Storage | EBS gp3 volumes | 937GB local NVMe |
| Bootstrap time | Minutes (partial sync) | 2–4 hours (state-sync) |
| Cloud support | AWS, GCP, Azure | AWS only |
| Staking key backup | Optional | Strongly recommended |
Quick Start
Provision Infrastructure
```bash
make primary-infra CLOUD=aws
```

This uses a separate Terraform state from the L1 deployment (`terraform/primary-network/aws/`), creating i4i.xlarge instances with 937GB NVMe drives.
Edit terraform/primary-network/aws/terraform.tfvars before running:
```hcl
primary_validator_count   = 2
enable_staking_key_backup = true
ssh_public_key            = "ssh-rsa AAAA..."
ssh_private_key_file      = "~/.ssh/avalanche-deploy"
```

Deploy Validators
```bash
make primary-deploy CLOUD=aws NETWORK=fuji  # or mainnet
```

This runs the `primary-network/deploy.yml` playbook, which:
- Installs AvalancheGo with Primary Network configuration
- Enables state-sync for fast initial bootstrap
- Waits for P/X/C chain bootstrap to complete (polls for up to 90 minutes)
- Backs up staking keys to S3 with KMS encryption
- Creates an initial database snapshot and uploads it to S3
Monitor Sync Progress
```bash
make primary-status CLOUD=aws
```

Bootstrap typically takes 2–4 hours via state-sync. All three chains (P, X, C) must report `isBootstrapped: true` before proceeding.
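You can also query a node's bootstrap status directly over AvalancheGo's Info API (default port 9650). This is an illustrative sketch, not part of the Makefile; `parse_bootstrapped` and `check_chain` are hypothetical helper names.

```shell
# Extract the boolean from an info.isBootstrapped JSON-RPC response on stdin.
parse_bootstrapped() {
  sed -n 's/.*"isBootstrapped":\([a-z]*\).*/\1/p'
}

# Query one chain's bootstrap status. Usage: check_chain <node-ip> <P|X|C>
check_chain() {
  curl -s -X POST -H 'content-type: application/json' \
    --data "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"info.isBootstrapped\",\"params\":{\"chain\":\"$2\"}}" \
    "http://$1:9650/ext/info" | parse_bootstrapped
}

# e.g.: for c in P X C; do echo "$c: $(check_chain 127.0.0.1 "$c")"; done
```

All three invocations must print `true` before registering the validator.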
Register Your Validator on the P-Chain
Validator registration requires staking AVAX on the P-Chain:
- Fuji testnet: 1 AVAX minimum
- Mainnet: 2,000 AVAX minimum
Register using Core Wallet or the Avalanche CLI. You will need your validator's NodeID, which is displayed by make primary-status.
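If you prefer not to go through `make primary-status`, the NodeID can be fetched straight from the node via the Info API's `info.getNodeID` method. A minimal sketch; `parse_node_id` and `get_node_id` are illustrative helper names.

```shell
# Extract "nodeID" from an info.getNodeID JSON-RPC response on stdin.
parse_node_id() {
  sed -n 's/.*"nodeID":"\([^"]*\)".*/\1/p'
}

# Query a node for its NodeID. Usage: get_node_id <node-ip>
get_node_id() {
  curl -s -X POST -H 'content-type: application/json' \
    --data '{"jsonrpc":"2.0","id":1,"method":"info.getNodeID","params":{}}' \
    "http://$1:9650/ext/info" | parse_node_id
}
```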
Back Up Staking Keys
```bash
make backup-keys CLOUD=aws
```

Staking keys are uploaded to S3 with KMS encryption. The validator instances have an IAM role that grants access to the backup bucket — no manual credential configuration required.
Staking Key Management
Staking keys are the cryptographic identity of your validator. Losing them means losing your NodeID and any associated staking position.
```bash
# Back up all validator keys to S3
make backup-keys CLOUD=aws

# Restore keys to a specific node
make restore-keys CLOUD=aws SOURCE=primary-validator-1 TARGET_IP=10.0.1.50

# List existing backups
aws s3 ls s3://$(terraform -chdir=terraform/primary-network/aws output -raw staking_keys_bucket)/
```

Always back up staking keys immediately after deployment and after any key rotation. Keys are encrypted with KMS, so even if the S3 bucket is compromised they cannot be read without KMS access.
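For reference, the archive step that `make backup-keys` automates amounts to bundling the node's staking directory (by default `~/.avalanchego/staking`, containing `staker.key` and `staker.crt`) into a checksummed tarball before upload. A sketch under those assumptions; `backup_staking_keys` is an illustrative helper, not part of the Makefile.

```shell
# Bundle a staking directory into a tarball with a SHA256 checksum.
# Usage: backup_staking_keys <staking_dir> <out.tar.gz>
backup_staking_keys() {
  tar -czf "$2" -C "$(dirname "$1")" "$(basename "$1")"
  # Record a checksum so a restore can verify integrity first.
  sha256sum "$2" > "$2.sha256"
  sha256sum -c "$2.sha256" >/dev/null
}

# e.g.: backup_staking_keys "$HOME/.avalanchego/staking" "staking-keys-$(date +%F).tar.gz"
```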
Database Snapshots
Create lz4-compressed snapshots of synced nodes for faster bootstrapping of new nodes. A pruned mainnet snapshot is approximately 400GB and restores in minutes compared to hours for state-sync.
```bash
# Create a snapshot from a synced validator
make create-snapshot CLOUD=aws NODE=primary-validator-1

# Create with a custom name
make create-snapshot CLOUD=aws NODE=primary-validator-1 NAME=mainnet-2025-02

# List available snapshots
make list-snapshots CLOUD=aws

# Restore a snapshot to a node
make restore-snapshot CLOUD=aws TARGET=migration-target
make restore-snapshot CLOUD=aws TARGET=migration-target SNAPSHOT=mainnet-2025-02
```

Snapshots are stored in S3 with KMS encryption and SHA256 checksums for integrity verification.
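Conceptually, the on-node archive step behind `make create-snapshot` is a compressed, checksummed tar of the stopped node's database directory. The sketch below assumes the default `~/.avalanchego/db` path and uses lz4 when available (falling back to gzip otherwise); `create_db_snapshot` is an illustrative name, not a real Makefile target.

```shell
# Archive a database directory into <name>.tar.lz4 (or .tar.gz) plus a
# SHA256 checksum file, and print the archive name.
# Usage: create_db_snapshot <db_dir> <name>
create_db_snapshot() {
  local comp ext
  if command -v lz4 >/dev/null 2>&1; then comp=lz4; ext=tar.lz4; else comp=gzip; ext=tar.gz; fi
  tar -C "$(dirname "$1")" -cf - "$(basename "$1")" | "$comp" > "$2.$ext"
  sha256sum "$2.$ext" > "$2.$ext.sha256"
  echo "$2.$ext"
}

# e.g.: create_db_snapshot "$HOME/.avalanchego/db" "mainnet-$(date +%F)"
```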
Validator Migration
Migrate a validator to a new instance with approximately 30 seconds of downtime. This is useful for hardware upgrades, instance type changes, or region moves.
Migration Steps
```bash
# 1. Prepare the new node (choose one):
# Option A: From snapshot (faster — minutes)
make prepare-migration CLOUD=aws NODE=migration-target SNAPSHOT=true

# Option B: From state-sync (slower — hours)
make prepare-migration CLOUD=aws NODE=migration-target

# 2. Wait for the new node to fully sync
./scripts/primary-network/check-sync.sh <new-node-ip>

# 3. Execute migration (~30s downtime)
make migrate-validator CLOUD=aws SOURCE=primary-validator-1 TARGET=migration-target
```

Cost Estimate (AWS us-east-1)
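The identity handover at the core of migration is the staking-key swap: the target node adopts the source's keys while both nodes are stopped, so the NodeID (and staking position) follows the keys. A local-filesystem sketch of that step only; `swap_staking_keys` is an illustrative helper, and the real playbook also handles service stop/start and the remote copy.

```shell
# Move the source node's staking identity onto the target.
# Usage: swap_staking_keys <source_staking_dir> <target_staking_dir>
swap_staking_keys() {
  [ -f "$1/staker.key" ] || { echo "no staking key in $1" >&2; return 1; }
  # Keep the target's previous identity around in case of rollback.
  if [ -d "$2" ]; then mv "$2" "$2.old.$(date +%s)"; fi
  mkdir -p "$2"
  cp -a "$1/." "$2/"
}
```

Never run source and target with the same staking keys at once: two live nodes sharing a NodeID will conflict on the network.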
| Component | Instance | Storage | Monthly Estimate |
|---|---|---|---|
| Primary Validator | i4i.xlarge | 937GB NVMe (included) | ~$310 |
| S3 + KMS (keys + snapshots) | — | ~1GB | ~$1 |
| Monitoring | t3.small | 50GB EBS | ~$15 |
| Per validator total | — | — | ~$326/mo |
Terraform Configuration Reference
Edit terraform/primary-network/aws/terraform.tfvars:
| Variable | Default | Description |
|---|---|---|
| primary_validator_count | 1 | Number of Primary Network validators |
| enable_staking_key_backup | true | Enable S3 backup with KMS encryption |
| ssh_public_key | — | SSH public key for node access |
| ssh_private_key_file | — | Path to SSH private key |
Node runtime configuration is at configs/primary-network/node/primary-validator-node-config.json, which includes state-sync-enabled: true and state-sync-min-blocks: 100000 for faster initial bootstrap.
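Based on the two keys named above, the relevant fragment of that config file looks like the following (all other fields omitted):

```json
{
  "state-sync-enabled": true,
  "state-sync-min-blocks": 100000
}
```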
Next Steps
- Operations guide — Rolling upgrades, monitoring, health checks
- Troubleshooting — Common issues and solutions