L1 Add-Ons
Deploy Blockscout block explorer, faucets, The Graph, ICM Relayer, eRPC load balancer, and Safe multisig for your Avalanche L1.
After deploying your L1, you can add optional services to enhance the developer and operator experience. Each add-on is available for both deployment paths: Ansible playbooks (Docker Compose on VMs) and Helm charts (Kubernetes).
All add-on commands assume you have sourced your L1 environment: `source l1.env`
eRPC Load Balancer
eRPC is deployed automatically during make configure-l1. It provides a single RPC endpoint that load balances across your archive and pruned RPC nodes with intelligent routing.
Features
- Intelligent routing — `debug_*` and `trace_*` methods route to archive nodes only
- Load balancing across all RPC nodes
- Automatic failover with circuit breaker
- Response caching
- Prometheus metrics
Endpoints
| Endpoint | URL |
|---|---|
| RPC | http://<monitoring-ip>:4000 |
| Health | http://<monitoring-ip>:4001/healthcheck |
Usage
Point your dApps and tools at the eRPC endpoint instead of individual nodes:
```bash
# Through eRPC (recommended)
curl -X POST http://<monitoring-ip>:4000 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

To skip eRPC during L1 configuration, add `SKIP_ERPC=true`:

```bash
make configure-l1 SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID SKIP_ERPC=true
```

To redeploy eRPC standalone:
Ansible:

```bash
source l1.env
make erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999
```

Kubernetes:

```bash
source l1.env
make k8s-erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999
```

The Helm chart auto-discovers RPC upstreams from the l1-rpc service. Override with custom upstreams in values.yaml.
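If auto-discovery doesn't fit your topology, a custom upstream list can be supplied via Helm values. A minimal, illustrative sketch — the exact key layout is specific to the helm/erpc chart in this repository, so check its default values.yaml before copying; the node names and addresses below are placeholders:

```yaml
# Illustrative values.yaml override (key names are assumptions — verify
# against the chart's default values.yaml).
upstreams:
  - id: archive-1
    endpoint: http://10.0.0.11:8545   # archive node: serves debug_*/trace_*
  - id: pruned-1
    endpoint: http://10.0.0.12:8545   # pruned node: standard eth_* traffic
```

Apply it with `helm upgrade --install erpc helm/erpc -f values.yaml`.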
Blockscout Block Explorer
Deploy a full-featured block explorer for your L1:
Ansible:

```bash
source l1.env
make deploy-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 CHAIN_NAME="My L1"
```

Kubernetes:

```bash
source l1.env
make k8s-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

# Access frontend
kubectl port-forward svc/blockscout-frontend 3000:3000
```

Access: http://<archive-rpc-ip>:4001
Blockscout is deployed to the first archive RPC host (falls back to the first generic RPC host on GCP/Azure). It includes the backend indexer, frontend UI, stats service, and nginx reverse proxy.
Initial indexing can take hours for chains with significant history. Monitor progress with `docker logs -f blockscout-backend` on the RPC node.
Faucet
Deploy a token faucet for developers to request test tokens:
Ansible:

```bash
source l1.env
make faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x...
```

Kubernetes:

```bash
source l1.env
make k8s-faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x...
```

Access: http://<rpc-ip>:8010
| Parameter | Description |
|---|---|
| CHAIN_ID | Blockchain ID from l1.env |
| EVM_CHAIN_ID | EVM chain ID from genesis |
| FAUCET_KEY | Hex private key of a funded wallet on your L1 |
The faucet wallet must be funded on your L1 chain. Use a dedicated wallet — not your deployer key.
The Graph Node
Deploy The Graph for indexing blockchain data via GraphQL subgraphs:
Ansible:

```bash
source l1.env
make graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1
```

Kubernetes:

```bash
source l1.env
make k8s-graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1

# Access GraphQL
kubectl port-forward svc/graph-node 8000:8000
```

Endpoints
| Endpoint | URL |
|---|---|
| GraphQL | http://<rpc-ip>:8000/subgraphs/name/<SUBGRAPH> |
| Admin | http://<rpc-ip>:8020 |
| IPFS | http://<rpc-ip>:5001 |
Deploying a Subgraph
After The Graph Node is running, deploy a subgraph:
```bash
# 1. Initialize your subgraph project
graph init --product hosted-service my-subgraph

# 2. Update subgraph.yaml with your L1 network
#    network: my-l1
#    source.address: "<CONTRACT_ADDRESS>"
#    source.startBlock: 0

# 3. Generate types and build
graph codegen && graph build

# 4. Create and deploy
graph create --node http://<rpc-ip>:8020 my-subgraph
graph deploy --node http://<rpc-ip>:8020 \
  --ipfs http://<rpc-ip>:5001 \
  my-subgraph
```

ICM Relayer (Cross-Chain Messaging)
Deploy the Interchain Messaging Relayer for cross-chain communication between your L1 and C-Chain:
```bash
source l1.env
make icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x...
```

Endpoints
| Endpoint | URL |
|---|---|
| API | http://<rpc-ip>:8080 |
| Health | http://<rpc-ip>:8080/health |
| Metrics | http://<rpc-ip>:9090/metrics |
How It Works
The ICM Relayer listens for Avalanche Warp Messages on source blockchains, aggregates BLS signatures from validators, and delivers cross-chain messages to destination blockchains. By default, it relays bidirectionally between your L1 and C-Chain.
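In practice, "bidirectional" means the relayer's generated configuration lists each chain as both a source and a destination. The sketch below is purely illustrative of that shape — the real icm-relayer config schema and field names vary between versions and are generated for you by the playbook, so treat every key and value here as a placeholder rather than the actual schema:

```json
{
  "source-blockchains": [
    { "subnet-id": "<SUBNET_ID>", "blockchain-id": "<CHAIN_ID>",
      "vm": "evm", "rpc-endpoint": "http://<rpc-ip>:8545" }
  ],
  "destination-blockchains": [
    { "subnet-id": "<PRIMARY_NETWORK_SUBNET_ID>", "blockchain-id": "<C_CHAIN_ID>",
      "vm": "evm", "rpc-endpoint": "http://<rpc-ip>:8545",
      "account-private-key": "<RELAYER_KEY>" }
  ]
}
```

With both directions configured, a message emitted on either chain is picked up, signed by the validator set, and delivered to the other.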
Configuration
| Parameter | Default | Description |
|---|---|---|
| SUBNET_ID | (required) | Subnet ID from l1.env |
| CHAIN_ID | (required) | Blockchain ID from l1.env |
| RELAYER_KEY | (required) | Hex private key for relay transactions |
| NETWORK | fuji | Network name (fuji or mainnet) |
The relayer key wallet must be funded on both chains — AVAX on C-Chain for gas, and your L1's native token on the L1 chain. Use a dedicated relay wallet.
Kubernetes Deployment
```bash
make k8s-icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x...
```

Safe Multisig
Deploy Gnosis Safe infrastructure for multisig governance of your L1:
Ansible:

```bash
make safe
```

This deploys the Safe UI, transaction service, client gateway, and nginx reverse proxy. It auto-detects chain configuration from l1.env.
Kubernetes:

```bash
source l1.env
make k8s-safe EVM_CHAIN_ID=99999 CHAIN_ID=$CHAIN_ID
```

Deploys Config Service (CFG), Transaction Service (TXS), Client Gateway (CGW), PostgreSQL (x2), Redis, and RabbitMQ. An init job handles DB migrations, contract registration, and Celery periodic task setup.
Safe UI requires a custom Docker image with NEXT_PUBLIC_* variables baked in at build time. Set ui.image.repository and ui.image.tag in your Helm values.
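A minimal values override for the custom UI image might look like the following — the registry path and tag are placeholders for your own build, and the `ui.image` keys are the ones named above:

```yaml
# Illustrative Helm values for helm/safe — point at the custom-built
# Safe UI image (NEXT_PUBLIC_* vars are baked in at image build time).
ui:
  image:
    repository: registry.example.com/safe-ui   # placeholder: your registry
    tag: v1.0.0                                # placeholder: your build tag
```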
Safe requires the Singleton Factory (0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7) in your genesis alloc. The default genesis template includes this.
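If you maintain your own genesis, the factory is pre-deployed via the `alloc` section. A sketch of the relevant entry — the runtime bytecode is elided here as a placeholder, so copy the actual `code` value from the default genesis template rather than this snippet:

```json
{
  "alloc": {
    "914d7Fec6aaC8cd542e72Bca78B30650d45643d7": {
      "balance": "0x0",
      "code": "0x<singleton-factory-runtime-bytecode>"
    }
  }
}
```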
For detailed Safe setup including contract deployment and chain registration, see the SAFE.md guide in the repository.
Add-On Summary
| Add-On | Ansible Playbook | Helm Chart | Ports |
|---|---|---|---|
| eRPC | l1/deploy-erpc.yml | helm/erpc | 4000, 4001 |
| Blockscout | l1/deploy-blockscout.yml | helm/blockscout | 3000, 4000 |
| Faucet | l1/deploy-faucet.yml | helm/faucet | 8010 |
| The Graph | l1/deploy-graph-node.yml | helm/graph-node | 8000, 8020, 5001 |
| ICM Relayer | l1/deploy-icm-relayer.yml | helm/icm-relayer | 8080, 9090 |
| Safe | l1/deploy-safe.yml | helm/safe | 3000, 8000, 8888 |
| Monitoring | shared/monitoring.yml | helm/monitoring | 3000, 9090 |
| Staking Key Backup | primary-network/backup-staking-keys.yml | helm/staking-key-backup | — |