# Deploy an L1 on Kubernetes

Launch an Avalanche L1 blockchain on Kubernetes using Helm charts, with local kind clusters for development and testing.
This guide covers deploying an Avalanche L1 on Kubernetes as an alternative to the Terraform + Ansible path. Use this when you already have a Kubernetes cluster or want local development with kind.
Requirements: kubectl, helm v3+, Docker (for kind). Time to deploy: ~15 minutes locally, ~30 minutes on a remote cluster (plus sync time).
## Prerequisites

- `kubectl` connected to your cluster
- `helm` v3+
- For local testing: `kind` and Docker
- For L1 creation: funded key in platform-cli keystore
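Before starting, you can confirm the required tools are on your PATH. A minimal sketch (the tool list mirrors the prerequisites above; `check_tools` is an illustrative helper, not a script shipped in the repo):

```shell
# check_tools: report which of the given commands are missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  echo "missing:$missing"
}

# Check everything this guide uses; nothing after "missing:" means you
# are good to go.
check_tools kubectl helm docker kind
```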
## Helm Charts

| Chart | Path | Purpose |
|---|---|---|
| avalanche-validator | helm/avalanche-validator | L1 validator nodes |
| avalanche-rpc | helm/avalanche-rpc | L1 RPC nodes |
| monitoring | helm/monitoring | Prometheus + Grafana |
| icm-relayer | helm/icm-relayer | Cross-chain messaging |
| erpc | helm/erpc | RPC load balancer with caching and failover |
| faucet | helm/faucet | Token faucet for developers |
| blockscout | helm/blockscout | Block explorer |
| graph-node | helm/graph-node | The Graph Node for subgraph indexing |
| safe | helm/safe | Safe multisig infrastructure |
| staking-key-backup | helm/staking-key-backup | Automated staking key backup CronJob |
## Quick Start with a Local Kind Cluster

### Create a Local Cluster
```bash
cd kubernetes
./scripts/create-kind-cluster.sh \
  --name=avalanche-l1 \
  --image=kindest/node:v1.34.0 \
  --workers=1
```

The first run pulls the node image and can take several minutes. If your machine is resource-constrained, start with `--workers=1` and scale up later.
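Cluster creation finishes before the nodes are actually schedulable, so later steps can race it. One way to guard against that is a small polling helper (a sketch; `wait_for` is illustrative and not a script shipped in the repo):

```shell
# wait_for: retry a command once per second until it succeeds or the
# attempt budget is exhausted.
wait_for() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}

# Example: block until a kind node reports Ready (up to 5 minutes).
# wait_for 300 sh -c 'kubectl get nodes --no-headers | grep -q " Ready"'
```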
### Deploy L1 Validators and RPC
```bash
helm upgrade --install l1-validators ./helm/avalanche-validator \
  -f ./helm/avalanche-validator/values-kind.yaml \
  --set network=fuji

helm upgrade --install l1-rpc ./helm/avalanche-rpc \
  -f ./helm/avalanche-rpc/values-kind.yaml \
  --set network=fuji
```

### Wait for P-Chain Sync
```bash
./scripts/wait-for-sync.sh --release=l1-validators
```

### Create Your L1
```bash
# Import or create a deployer key
platform keys import --name l1-deployer
platform keys default --name l1-deployer

./scripts/create-l1.sh \
  --release=l1-validators \
  --network=fuji \
  --chain-name=mychain \
  --output=l1.env \
  --key-name=l1-deployer
```

The script collects NodeIDs from validator pods and runs the same P-Chain transactions as the Terraform path: CreateSubnetTx, CreateChainTx, and ConvertSubnetToL1Tx.
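The add-on commands later in this guide all `source l1.env`, so it is worth validating the file right after creation. A sketch that checks for the variables this guide references (`SUBNET_ID`, `CHAIN_ID`); `check_l1_env` is illustrative, and you should adjust the list if your l1.env contains more:

```shell
# check_l1_env: source an env file and verify the expected variables
# are present and non-empty.
check_l1_env() {
  env_file="$1"
  . "$env_file"
  for var in SUBNET_ID CHAIN_ID; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "missing: $var"
      return 1
    fi
  done
  echo "ok"
}

# Usage after ./scripts/create-l1.sh:
# check_l1_env l1.env
```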
### Configure Validators for Your L1
```bash
./scripts/configure-l1.sh --release=l1-validators --env=l1.env
```

### Verify Status
```bash
./scripts/status.sh --release=l1-validators
```

## Deploying on an Existing Cluster
Skip the kind cluster creation step and use the same Helm releases and scripts above. Ensure your cluster has sufficient resources for the validator and RPC pods.
## Accessing RPC
```bash
# L1 RPC service
kubectl port-forward svc/l1-rpc 9650:9650

# Then query
curl -X POST http://localhost:9650/ext/bc/<CHAIN_ID>/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

## Add-Ons on Kubernetes
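The `eth_blockNumber` result comes back as a hex quantity (e.g. `"0x1b4"`). A small helper to pull it out of the response and print it in decimal, using only POSIX shell tools (`block_number_dec` is an illustrative name, not part of the repo):

```shell
# block_number_dec: extract the "result" hex quantity from a JSON-RPC
# response (or accept a bare 0x value) and print it in decimal.
block_number_dec() {
  hex=$(printf '%s' "$1" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')
  [ -z "$hex" ] && hex="$1"
  echo "$(( hex ))"
}

# Typical usage with the curl call above:
# block_number_dec "$(curl -s -X POST http://localhost:9650/ext/bc/<CHAIN_ID>/rpc \
#   -H 'Content-Type: application/json' \
#   -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}')"
block_number_dec '{"jsonrpc":"2.0","id":1,"result":"0x1b4"}'   # prints 436
```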
All add-on services are available as Helm charts. After your L1 is running:
### Monitoring
```bash
make k8s-monitoring

# Access Grafana
kubectl port-forward svc/monitoring-grafana 3000:3000
# http://localhost:3000 (admin/admin)
```

### eRPC Load Balancer
```bash
source l1.env
make k8s-erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999
```

Auto-discovers RPC upstreams from the l1-rpc service. Provides caching, circuit breaking, hedged requests, and failover.
### Faucet
```bash
source l1.env
make k8s-faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x...
```

### Blockscout Block Explorer
```bash
source l1.env
make k8s-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

# Access frontend
kubectl port-forward svc/blockscout-frontend 3000:3000
```

Deploys the full Blockscout stack: backend indexer, frontend, PostgreSQL, Redis, and optional smart contract verifier.
### The Graph Node
```bash
source l1.env
make k8s-graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1

# Access GraphQL
kubectl port-forward svc/graph-node 8000:8000
```

### ICM Relayer
```bash
source l1.env
make k8s-icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x...
```

The relayer connects to the l1-rpc service by default. Override with `--set avalanchego.serviceName=<svc>`.
### Safe Multisig
```bash
make k8s-safe EVM_CHAIN_ID=99999 CHAIN_ID=$CHAIN_ID
```

Deploys Config Service, Transaction Service, Client Gateway, PostgreSQL (x2), Redis, and RabbitMQ. An init job handles DB migrations, contract registration, and indexer task setup.
The Safe UI requires a custom Docker image with `NEXT_PUBLIC_*` variables baked in at build time. Set `ui.image.repository` and `ui.image.tag` in your Helm values to deploy a pre-built image.
## Operations on Kubernetes

### Health Checks
```bash
make k8s-health-checks                      # All nodes
make k8s-health-checks CHAIN_ID=$CHAIN_ID   # Include L1 chain status
```

Checks pod status, /ext/health, P/X/C chain bootstrap, L1 sync, and node version consistency.
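For a quick spot check without the make target, you can port-forward to a node and hit `/ext/health` directly; the response is JSON with a top-level `healthy` boolean. A jq-free sketch of reading it (`node_healthy` is illustrative, and the grep match is a good-enough heuristic rather than a strict JSON parse):

```shell
# node_healthy: print "healthy" or "unhealthy" depending on whether the
# /ext/health response contains "healthy":true.
node_healthy() {
  if printf '%s' "$1" | grep -q '"healthy"[[:space:]]*:[[:space:]]*true'; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}

# Typical usage (assumes an active port-forward to a node pod on 9650):
# node_healthy "$(curl -s http://localhost:9650/ext/health)"
node_healthy '{"checks":{"C":{}},"healthy":true}'
```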
### Staking Key Backup
```bash
# Deploy a daily backup CronJob to S3
make k8s-backup-keys BACKUP_BUCKET=my-bucket BACKUP_PROVIDER=s3
```

Supports S3 and GCS. Use IRSA or Workload Identity for credential-free access on managed Kubernetes.
### L1 Reset
```bash
make k8s-reset-l1
```

Scales down pods, cleans chain data (preserves staking keys), removes L1 tracking config, and scales back up.
### ValidatorManager Initialization
```bash
make k8s-init-validator-manager \
  SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID \
  CONVERSION_TX=<tx-hash> PROXY_ADDRESS=0x... EVM_CHAIN_ID=99999
```

Port-forwards to an RPC pod and runs the Go initialization tool.
## Make Wrappers

From the repo root, you can also use make targets:
| Command | Description |
|---|---|
| make k8s-kind | Create local kind cluster |
| make k8s-l1-deploy | Deploy L1 validators + RPC |
| make k8s-l1-wait | Wait for P-Chain sync |
| make k8s-l1-create | Create L1 chain |
| make k8s-l1-configure | Configure validators for L1 |
| make k8s-l1-status | Check L1 status |
| make k8s-monitoring | Deploy monitoring stack |
| make k8s-icm-relayer | Deploy ICM Relayer |
| make k8s-erpc | Deploy eRPC load balancer |
| make k8s-faucet | Deploy token faucet |
| make k8s-blockscout | Deploy Blockscout block explorer |
| make k8s-graph-node | Deploy The Graph Node |
| make k8s-safe | Deploy Safe multisig infrastructure |
| make k8s-backup-keys | Deploy staking key backup CronJob |
| make k8s-health-checks | Run comprehensive health checks |
| make k8s-reset-l1 | Reset L1 for redeployment |
| make k8s-init-validator-manager | Initialize ValidatorManager contract |
| make k8s-cleanup | Remove releases and optional PVC/kind cleanup |
## Troubleshooting

### Pods Stuck in Pending
```bash
kubectl describe pod <pod-name>
```

If you see `Insufficient cpu` or `does not have a host assigned`, use the kind-specific values files:

```bash
helm upgrade --install l1-validators ./helm/avalanche-validator \
  -f ./helm/avalanche-validator/values-kind.yaml --set network=fuji
```

### Node Not Syncing
```bash
kubectl logs <pod-name> -f
```

### Kind Fails with "No Such Container"
This usually means the Docker daemon API is unhealthy. Restart Docker Desktop and retry `./scripts/create-kind-cluster.sh`.
## Cleanup
```bash
cd kubernetes
./scripts/cleanup.sh
```

This removes Helm releases and optionally deletes PVCs and the kind cluster.
## Next Steps
- Deploy with Terraform + Ansible instead — Full-featured deployment with archive/pruned RPC split and staking key backup
- Deploy add-ons — Blockscout, faucet, The Graph, ICM Relayer
- Operations guide — Upgrades, monitoring, health checks
- Troubleshooting — Common issues and solutions