
L1 Add-Ons

Deploy Blockscout block explorer, faucets, The Graph, ICM Relayer, eRPC load balancer, and Safe multisig for your Avalanche L1.

After deploying your L1, you can add optional services to enhance the developer and operator experience. Each add-on is available for both deployment paths: Ansible playbooks (Docker Compose on VMs) and Helm charts (Kubernetes).

All add-on commands assume you have sourced your L1 environment: source l1.env

eRPC Load Balancer

eRPC is deployed automatically during make configure-l1. It provides a single RPC endpoint that load balances across your archive and pruned RPC nodes with intelligent routing.

Features

  • Intelligent routing: debug_* and trace_* methods route to archive nodes only
  • Load balancing across all RPC nodes
  • Automatic failover with circuit breaker
  • Response caching
  • Prometheus metrics
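The archive-only routing rule can be sketched as a small selector function. This is illustrative only — the method prefixes come from the list above, but eRPC's actual implementation is its own and the function and upstream labels here are assumptions:

```python
# Illustrative sketch of eRPC-style method routing (not eRPC's actual code).
ARCHIVE_ONLY_PREFIXES = ("debug_", "trace_")

def pick_upstreams(method: str, archive: list[str], pruned: list[str]) -> list[str]:
    """Return candidate upstreams for a JSON-RPC method.

    debug_*/trace_* calls need historical state, so they go to archive
    nodes only; everything else is balanced across all RPC nodes.
    """
    if method.startswith(ARCHIVE_ONLY_PREFIXES):
        return archive
    return archive + pruned

print(pick_upstreams("debug_traceTransaction", ["archive-1"], ["pruned-1"]))
# → ['archive-1']
print(pick_upstreams("eth_blockNumber", ["archive-1"], ["pruned-1"]))
# → ['archive-1', 'pruned-1']
```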

Endpoints

| Endpoint | URL |
|---|---|
| RPC | http://<monitoring-ip>:4000 |
| Health | http://<monitoring-ip>:4001/healthcheck |

Usage

Point your dApps and tools at the eRPC endpoint instead of individual nodes:

# Through eRPC (recommended)
curl -X POST http://<monitoring-ip>:4000 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

To skip eRPC during L1 configuration, add SKIP_ERPC=true:

make configure-l1 SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID SKIP_ERPC=true

To redeploy eRPC standalone:

# Ansible (Docker Compose)
source l1.env
make erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

# Kubernetes (Helm)
source l1.env
make k8s-erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

The Helm chart auto-discovers RPC upstreams from the l1-rpc service. Override with custom upstreams in values.yaml.
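If auto-discovery does not match your topology, upstreams can be pinned in your Helm values instead. The keys below are an assumption modeled on the behavior described above — check helm/erpc/values.yaml for the chart's real schema:

```yaml
# Hypothetical values.yaml override — verify key names against helm/erpc/values.yaml.
erpc:
  upstreams:
    - id: archive-1
      endpoint: http://archive-rpc-0.l1-rpc:8545   # assumed in-cluster service name
      type: archive
    - id: pruned-1
      endpoint: http://pruned-rpc-0.l1-rpc:8545
      type: pruned
```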

Blockscout Block Explorer

Deploy a full-featured block explorer for your L1:

# Ansible (Docker Compose)
source l1.env
make deploy-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 CHAIN_NAME="My L1"

# Kubernetes (Helm)
source l1.env
make k8s-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999

# Access frontend
kubectl port-forward svc/blockscout-frontend 3000:3000

Access: http://<archive-rpc-ip>:4001

Blockscout is deployed to the first archive RPC host (falls back to the first generic RPC host on GCP/Azure). It includes the backend indexer, frontend UI, stats service, and nginx reverse proxy.

Initial indexing can take hours for chains with significant history. Monitor progress with docker logs -f blockscout-backend on the RPC node.

Faucet

Deploy a token faucet for developers to request test tokens:

# Ansible (Docker Compose)
source l1.env
make faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x...

# Kubernetes (Helm)
source l1.env
make k8s-faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x...

Access: http://<rpc-ip>:8010

| Parameter | Description |
|---|---|
| CHAIN_ID | Blockchain ID from l1.env |
| EVM_CHAIN_ID | EVM chain ID from genesis |
| FAUCET_KEY | Hex private key of a funded wallet on your L1 |

The faucet wallet must be funded on your L1 chain. Use a dedicated wallet — not your deployer key.

The Graph Node

Deploy The Graph for indexing blockchain data via GraphQL subgraphs:

# Ansible (Docker Compose)
source l1.env
make graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1

# Kubernetes (Helm)
source l1.env
make k8s-graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1

# Access GraphQL
kubectl port-forward svc/graph-node 8000:8000

Endpoints

| Endpoint | URL |
|---|---|
| GraphQL | http://<rpc-ip>:8000/subgraphs/name/<SUBGRAPH> |
| Admin | http://<rpc-ip>:8020 |
| IPFS | http://<rpc-ip>:5001 |
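A deployed subgraph is queried with standard GraphQL over the endpoint above. A hypothetical example, assuming your schema defines a `Transfer` entity (entity and field names are illustrative, not part of this setup):

```graphql
query {
  transfers(first: 5, orderBy: blockNumber, orderDirection: desc) {
    id
    from
    to
    value
  }
}
```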

Deploying a Subgraph

After The Graph Node is running, deploy a subgraph:

# 1. Initialize your subgraph project
graph init --product hosted-service my-subgraph

# 2. Update subgraph.yaml with your L1 network
#    network: my-l1
#    source.address: "<CONTRACT_ADDRESS>"
#    source.startBlock: 0

# 3. Generate types and build
graph codegen && graph build

# 4. Create and deploy
graph create --node http://<rpc-ip>:8020 my-subgraph
graph deploy --node http://<rpc-ip>:8020 \
  --ipfs http://<rpc-ip>:5001 \
  my-subgraph
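Step 2 amounts to editing the manifest's data source to point at your L1. A sketch of the relevant part of subgraph.yaml — the contract name, ABI path, and handler names are hypothetical placeholders from `graph init`, not values this guide defines:

```yaml
# Sketch of the data-source section of subgraph.yaml (names are placeholders).
dataSources:
  - kind: ethereum/contract
    name: MyContract
    network: my-l1               # must match NETWORK_NAME used above
    source:
      address: "<CONTRACT_ADDRESS>"
      abi: MyContract
      startBlock: 0
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mapping.ts
```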

ICM Relayer (Cross-Chain Messaging)

Deploy the Interchain Messaging Relayer for cross-chain communication between your L1 and C-Chain:

source l1.env
make icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x...

Endpoints

| Endpoint | URL |
|---|---|
| API | http://<rpc-ip>:8080 |
| Health | http://<rpc-ip>:8080/health |
| Metrics | http://<rpc-ip>:9090/metrics |

How It Works

The ICM Relayer listens for Avalanche Warp Messages on source blockchains, aggregates BLS signatures from validators, and delivers cross-chain messages to destination blockchains. By default, it relays bidirectionally between your L1 and C-Chain.
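The delivery decision in that flow boils down to a stake-weighted quorum check on the aggregated BLS signatures. A minimal sketch, assuming the default Warp quorum of 67% of validator stake (this is illustrative, not the actual icm-services implementation):

```python
# Illustrative quorum check for a Warp message (not icm-services code).
# A message becomes deliverable once validators holding at least the
# quorum fraction of total stake have contributed BLS signatures.
QUORUM_NUM, QUORUM_DEN = 67, 100  # assumed default: 67% of stake

def has_quorum(signed_stake: int, total_stake: int) -> bool:
    """True if collected signatures cover enough validator stake."""
    return signed_stake * QUORUM_DEN >= total_stake * QUORUM_NUM

print(has_quorum(70, 100))  # → True: 70% of stake signed, message deliverable
print(has_quorum(60, 100))  # → False: keep collecting signatures
```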

Configuration

| Parameter | Default | Description |
|---|---|---|
| SUBNET_ID | (required) | Subnet ID from l1.env |
| CHAIN_ID | (required) | Blockchain ID from l1.env |
| RELAYER_KEY | (required) | Hex private key for relay transactions |
| NETWORK | fuji | Network name (fuji or mainnet) |

The relayer key wallet must be funded on both chains — AVAX on C-Chain for gas, and your L1's native token on the L1 chain. Use a dedicated relay wallet.

Kubernetes Deployment

make k8s-icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x...

Safe Multisig

Deploy Gnosis Safe infrastructure for multisig governance of your L1:

# Ansible (Docker Compose)
make safe

This deploys the Safe UI, transaction service, client gateway, and nginx reverse proxy. It auto-detects chain configuration from l1.env.

# Kubernetes (Helm)
source l1.env
make k8s-safe EVM_CHAIN_ID=99999 CHAIN_ID=$CHAIN_ID

Deploys Config Service (CFG), Transaction Service (TXS), Client Gateway (CGW), PostgreSQL (x2), Redis, and RabbitMQ. An init job handles DB migrations, contract registration, and Celery periodic task setup.

Safe UI requires a custom Docker image with NEXT_PUBLIC_* variables baked in at build time. Set ui.image.repository and ui.image.tag in your Helm values.

Safe requires the Singleton Factory (0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7) in your genesis alloc. The default genesis template includes this.
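In genesis terms, that means an alloc entry predeploying the factory's runtime bytecode at that address. A sketch of the shape, assuming a geth-style alloc as used by subnet-evm — the bytecode itself is elided here; take it from the default genesis template or the safe-singleton-factory repository:

```json
{
  "alloc": {
    "914d7Fec6aaC8cd542e72Bca78B30650d45643d7": {
      "balance": "0x0",
      "code": "0x..."
    }
  }
}
```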

For detailed Safe setup including contract deployment and chain registration, see the SAFE.md guide in the repository.

Add-On Summary

| Add-On | Ansible Playbook | Helm Chart | Ports |
|---|---|---|---|
| eRPC | l1/deploy-erpc.yml | helm/erpc | 4000, 4001 |
| Blockscout | l1/deploy-blockscout.yml | helm/blockscout | 3000, 4000 |
| Faucet | l1/deploy-faucet.yml | helm/faucet | 8010 |
| The Graph | l1/deploy-graph-node.yml | helm/graph-node | 8000, 8020, 5001 |
| ICM Relayer | l1/deploy-icm-relayer.yml | helm/icm-relayer | 8080, 9090 |
| Safe | l1/deploy-safe.yml | helm/safe | 3000, 8000, 8888 |
| Monitoring | shared/monitoring.yml | helm/monitoring | 3000, 9090 |
| Staking Key Backup | primary-network/backup-staking-keys.yml | helm/staking-key-backup | |
