
Deploy an L1 on Kubernetes

Launch an Avalanche L1 blockchain on Kubernetes using Helm charts, with local kind clusters for development and testing.

This guide covers deploying an Avalanche L1 on Kubernetes as an alternative to the Terraform + Ansible path. Use this when you already have a Kubernetes cluster or want local development with kind.

Requirements: kubectl, helm v3+, Docker (for kind). Time to deploy: ~15 minutes locally, ~30 minutes on a remote cluster (plus sync time).

Prerequisites

  • kubectl connected to your cluster
  • helm v3+
  • For local testing: kind and Docker
  • For L1 creation: funded key in platform-cli keystore

Helm Charts

Chart                Path                       Purpose
avalanche-validator  helm/avalanche-validator   L1 validator nodes
avalanche-rpc        helm/avalanche-rpc         L1 RPC nodes
monitoring           helm/monitoring            Prometheus + Grafana
icm-relayer          helm/icm-relayer           Cross-chain messaging
erpc                 helm/erpc                  RPC load balancer with caching and failover
faucet               helm/faucet                Token faucet for developers
blockscout           helm/blockscout            Block explorer
graph-node           helm/graph-node            The Graph Node for subgraph indexing
safe                 helm/safe                  Safe multisig infrastructure
staking-key-backup   helm/staking-key-backup    Automated staking key backup CronJob

Quick Start with Local Kind Cluster

Create a Local Cluster

cd kubernetes

./scripts/create-kind-cluster.sh \
  --name=avalanche-l1 \
  --image=kindest/node:v1.34.0 \
  --workers=1

The first run pulls the node image and can take several minutes. If your machine is resource-constrained, start with --workers=1 and scale up later.

Deploy L1 Validators and RPC

helm upgrade --install l1-validators ./helm/avalanche-validator \
  -f ./helm/avalanche-validator/values-kind.yaml \
  --set network=fuji

helm upgrade --install l1-rpc ./helm/avalanche-rpc \
  -f ./helm/avalanche-rpc/values-kind.yaml \
  --set network=fuji

Wait for P-Chain Sync

./scripts/wait-for-sync.sh --release=l1-validators

Create Your L1

# Import or create a deployer key
platform keys import --name l1-deployer
platform keys default --name l1-deployer

./scripts/create-l1.sh \
  --release=l1-validators \
  --network=fuji \
  --chain-name=mychain \
  --output=l1.env \
  --key-name=l1-deployer

The script collects NodeIDs from validator pods and runs the same P-Chain transactions as the Terraform path: CreateSubnetTx, CreateChainTx, and ConvertSubnetToL1Tx.
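Later make targets source the generated env file for the subnet and chain identifiers. A hypothetical sketch of the shape of l1.env (the exact keys are whatever create-l1.sh writes; the values below are placeholders, not real IDs):

```shell
# Hypothetical shape of l1.env as written by create-l1.sh.
# SUBNET_ID and CHAIN_ID are the identifiers the k8s-* make targets source.
SUBNET_ID=<p-chain-subnet-id>   # from CreateSubnetTx
CHAIN_ID=<blockchain-id>        # from CreateChainTx
```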

Configure Validators for Your L1

./scripts/configure-l1.sh --release=l1-validators --env=l1.env

Verify Status

./scripts/status.sh --release=l1-validators

Deploying on an Existing Cluster

Skip the kind cluster creation step and use the same Helm releases and scripts above. Ensure your cluster has sufficient resources for the validator and RPC pods.

Accessing RPC

# L1 RPC service
kubectl port-forward svc/l1-rpc 9650:9650

# Then query
curl -X POST http://localhost:9650/ext/bc/<CHAIN_ID>/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
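The `result` field comes back as a hex-encoded quantity, per JSON-RPC convention. A minimal sketch of decoding it in the shell (the response body here is a stand-in, not real chain data):

```shell
# Stand-in response body from the eth_blockNumber call above.
RESPONSE='{"jsonrpc":"2.0","id":1,"result":"0x2a"}'

# Pull out the hex quantity and let shell arithmetic convert it to decimal.
HEX=$(printf '%s' "$RESPONSE" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')
echo "$((HEX))"   # prints 42
```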

Add-Ons on Kubernetes

All add-on services are available as Helm charts. After your L1 is running:

Monitoring

make k8s-monitoring

# Access Grafana
kubectl port-forward svc/monitoring-grafana 3000:3000
# http://localhost:3000 (admin/admin)

eRPC Load Balancer

source l1.env
make k8s-erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

Auto-discovers RPC upstreams from the l1-rpc service. Provides caching, circuit breaking, hedged requests, and failover.

Faucet

source l1.env
make k8s-faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x...

Blockscout Block Explorer

source l1.env
make k8s-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

# Access frontend
kubectl port-forward svc/blockscout-frontend 3000:3000

Deploys the full Blockscout stack: backend indexer, frontend, PostgreSQL, Redis, and optional smart contract verifier.

The Graph Node

source l1.env
make k8s-graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1

# Access GraphQL
kubectl port-forward svc/graph-node 8000:8000

ICM Relayer

source l1.env
make k8s-icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x...

The relayer connects to the l1-rpc service by default. Override with --set avalanchego.serviceName=<svc>.

Safe Multisig

make k8s-safe EVM_CHAIN_ID=99999 CHAIN_ID=$CHAIN_ID

Deploys Config Service, Transaction Service, Client Gateway, PostgreSQL (x2), Redis, and RabbitMQ. An init job handles DB migrations, contract registration, and indexer task setup.

Safe UI requires a custom Docker image with NEXT_PUBLIC_* variables baked in at build time. Set ui.image.repository and ui.image.tag in your Helm values to deploy a pre-built image.
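A hypothetical values override pointing the chart at such a pre-built image (the repository and tag below are placeholders, not a published image):

```yaml
# Hypothetical Helm values override for the safe chart.
# Point ui.image at an image you built with the NEXT_PUBLIC_* vars baked in.
ui:
  image:
    repository: registry.example.com/safe-ui   # placeholder
    tag: v1.0.0                                # placeholder
```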

Operations on Kubernetes

Health Checks

make k8s-health-checks                     # All nodes
make k8s-health-checks CHAIN_ID=$CHAIN_ID  # Include L1 chain status

Checks pod status, /ext/health, P/X/C chain bootstrap, L1 sync, and node version consistency.

Staking Key Backup

# Deploy a daily backup CronJob to S3
make k8s-backup-keys BACKUP_BUCKET=my-bucket BACKUP_PROVIDER=s3

Supports S3 and GCS. Use IRSA or Workload Identity for credential-free access on managed Kubernetes.
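The credential-free setup mentioned above comes down to annotating the CronJob's service account. A hypothetical values fragment for IRSA on EKS (the chart's exact service-account values key is an assumption; the annotation itself is the standard IRSA mechanism):

```yaml
# Hypothetical values for the staking-key-backup chart on EKS.
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<backup-role>
```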

L1 Reset

make k8s-reset-l1

Scales down pods, cleans chain data (preserves staking keys), removes L1 tracking config, and scales back up.

ValidatorManager Initialization

make k8s-init-validator-manager \
  SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID \
  CONVERSION_TX=<tx-hash> PROXY_ADDRESS=0x... EVM_CHAIN_ID=99999

Port-forwards to an RPC pod and runs the Go initialization tool.

Make Wrappers

From the repo root, you can also use make targets:

Command                          Description
make k8s-kind                    Create local kind cluster
make k8s-l1-deploy               Deploy L1 validators + RPC
make k8s-l1-wait                 Wait for P-Chain sync
make k8s-l1-create               Create L1 chain
make k8s-l1-configure            Configure validators for L1
make k8s-l1-status               Check L1 status
make k8s-monitoring              Deploy monitoring stack
make k8s-icm-relayer             Deploy ICM Relayer
make k8s-erpc                    Deploy eRPC load balancer
make k8s-faucet                  Deploy token faucet
make k8s-blockscout              Deploy Blockscout block explorer
make k8s-graph-node              Deploy The Graph Node
make k8s-safe                    Deploy Safe multisig infrastructure
make k8s-backup-keys             Deploy staking key backup CronJob
make k8s-health-checks           Run comprehensive health checks
make k8s-reset-l1                Reset L1 for redeployment
make k8s-init-validator-manager  Initialize ValidatorManager contract
make k8s-cleanup                 Remove releases and optional PVC/kind cleanup

Troubleshooting

Pods Stuck in Pending

kubectl describe pod <pod-name>

If you see "Insufficient cpu" or "does not have a host assigned", use the kind-specific values files:

helm upgrade --install l1-validators ./helm/avalanche-validator \
  -f ./helm/avalanche-validator/values-kind.yaml --set network=fuji

Node Not Syncing

kubectl logs <pod-name> -f

Kind Fails with "No Such Container"

This usually means the Docker daemon API is unhealthy. Restart Docker Desktop and retry ./scripts/create-kind-cluster.sh.

Cleanup

cd kubernetes
./scripts/cleanup.sh

This removes Helm releases and optionally deletes PVCs and the kind cluster.
